lejdi koci

1.6K posts

@primary_key

cofounder https://t.co/jStLgJnySP, https://t.co/kDIiqqim5H, https://t.co/Comb1s9vut

Tirana · Joined October 2010
235 Following · 174 Followers
lejdi koci retweeted
Donald Tusk
Donald Tusk@donaldtusk·
Dear American friends, Europe is your closest ally, not your problem. And we have common enemies. At least that’s how it has been in the last 80 years. We need to stick to this, this is the only reasonable strategy of our common security. Unless something has changed.
12.6K replies · 11.2K reposts · 83.2K likes · 39.5M views
lejdi koci
lejdi koci@primary_key·
@skirano Hi, great work! Will you post a GH link anytime soon?
0 replies · 0 reposts · 0 likes · 81 views
Pietro Schirano
Pietro Schirano@skirano·
Claude Opus's ability to orchestrate subagents is absolutely insane and deserves more attention. Watch Claude direct subagents to build an entire drawing app. 🧙‍♂️ If you like, I could share this code, which can basically solve any goals you present, step by step. Like Devin!
87 replies · 142 reposts · 1.3K likes · 188.9K views
lejdi koci retweeted
Carlos E. Perez
Carlos E. Perez@IntuitMachine·
Groq is a radically different kind of AI architecture.

Among the new crop of AI chip startups, Groq stands out with a radically different approach centered around its compiler technology for optimizing a minimalist yet high-performance architecture. Groq's secret sauce is this compiler-first method that shuns complexity in favor of tailored efficiency.

At the heart of Groq's architecture is an almost surprisingly bare-bones design that does away with unnecessary logic in favor of raw parallel throughput. The hardware itself is comparable to an ASIC – an application-specific integrated circuit finely tuned for machine learning. However, unlike a fixed-function ASIC, Groq leverages a custom compiler that can adapt and optimize across different models. It is this combination of a streamlined architecture and an intelligent compiler that sets Groq apart.

The key insight is that many AI chips stack components, like GPUs, that bring extraneous hardware and bloat. Groq returns to first principles, recognizing that machine learning workloads are about massive parallelism over simple data types and operations. By eliminating generic hardware and even concepts like locality, the design maximizes throughput and efficiency.

This is enabled by Groq's compiler, which sits between software frameworks like TensorFlow and the hardware. The compiler analyzes and optimizes neural network graphs, tailoring and mapping them to the underlying architecture for accelerated execution. It breaks computations into the smallest operations to unlock parallelism. The compiler also enables capabilities like batch-size-1 inference that ensure all hardware is usefully leveraged.

Critically, Groq built its compiler before even finalizing the hardware design. The software insights directly informed the architecture. This co-design process allowed inference-specific optimization without legacy limitations. The compiler also provides deterministic guarantees of runtimes, enabling reliable scaling.

Together, the Groq compiler and architecture form a streamlined, robust engine for machine learning inference. The innovative compiler-first methodology allows custom optimization that balances flexibility with performance. Rather than chasing complexity, Groq realizes less can be more when software and hardware align – a compelling recipe as AI workloads continue evolving.
Carlos E. Perez tweet media
100 replies · 670 reposts · 3.9K likes · 2.2M views
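The deterministic, compiler-first idea in the tweet above can be illustrated with a toy sketch. This is not Groq's actual toolchain: it only shows how a compiler can break a graph into small ops, statically map them onto a fixed number of parallel lanes, and know the exact cycle count before anything runs (the "deterministic guarantees of runtimes"). All names and the example graph are invented for illustration.

```python
# Toy static scheduler in the spirit of a compiler-first design:
# every op gets a cycle slot at compile time, so runtime is deterministic.

def schedule(ops, deps, lanes=4):
    """ops: op names in dependency order; deps: {op: set of prerequisite ops}.
    Returns {op: cycle}: each op is placed at the earliest cycle where its
    prerequisites have finished and a lane is still free."""
    cycle_of = {}
    lane_load = {}  # cycle -> number of ops already placed in that cycle
    for op in ops:
        earliest = 1 + max((cycle_of[d] for d in deps.get(op, ())), default=-1)
        c = earliest
        while lane_load.get(c, 0) >= lanes:  # all lanes busy: slide forward
            c += 1
        cycle_of[op] = c
        lane_load[c] = lane_load.get(c, 0) + 1
    return cycle_of

# A tiny invented "neural net graph": two matmuls feed an add, then a ReLU.
deps = {"add": {"matmul_a", "matmul_b"}, "relu": {"add"}}
plan = schedule(["matmul_a", "matmul_b", "add", "relu"], deps, lanes=2)
total_cycles = 1 + max(plan.values())  # known entirely at "compile" time
```

Because the schedule is fixed ahead of time, two runs of the same graph always take the same number of cycles, which is the property the tweet attributes to Groq's compiler.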
lejdi koci retweeted
James Campbell
James Campbell@jam3scampbell·
People are really bad at understanding just how big LLMs actually are. I think this is partly why they belittle them as 'just' next-word predictors
James Campbell tweet media
81 replies · 417 reposts · 3.3K likes · 587.2K views
lejdi koci retweeted
vicki
vicki@vboykis·
LLMs are so weird because one side is people with five PhDs who have been studying neuron activations for the past three decades and on the other side is someone called leetm5n with an anime avatar just casually releasing increasingly better performing fine tunes of mistral
56 replies · 355 reposts · 3.7K likes · 405.9K views
lejdi koci
lejdi koci@primary_key·
@francoisfleuret Math as taught in Albanian universities has two problems: 1) it remains at a theoretical level; 2) there is no "math for AI" book. Found tomyeh.info on LinkedIn; would love a book on math in his style.
0 replies · 0 reposts · 0 likes · 21 views
François Fleuret
François Fleuret@francoisfleuret·
Those who say they lack the proper math background for AI, what are the pain points?
158 replies · 35 reposts · 863 likes · 590.2K views
lejdi koci
lejdi koci@primary_key·
@thealexker I don't think this will kill all of them. They cannot do both AGI and chat with PDFs. Dedicated products will certainly add more features, both functional and usability-wise.
0 replies · 0 reposts · 0 likes · 32 views
Alex Ker 🔭
Alex Ker 🔭@thealexker·
Many startups just died today. Because OpenAI added PDF chat. You can also chat with data files and other document types. We had a wave of products better suited as features rather than stand-alone companies. Wrappers are being squeezed by OpenAI on one side and incumbents on the other. It's a rough world out there.
Alex Ker 🔭 tweet media
313 replies · 675 reposts · 5.4K likes · 4.2M views
lejdi koci
lejdi koci@primary_key·
@thdxr You may put that subquery in a temp table, add indexes, and then use joins… we have been doing this for a long time with MySQL queries.
0 replies · 0 reposts · 0 likes · 110 views
dax
dax@thdxr·
"sql is a declarative language" then tell me why i've been fighting the query planner for the last hour trying to get it to run a subquery first before other where clauses
56 replies · 9 reposts · 397 likes · 109K views
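The temp-table trick from the reply above can be sketched end to end. This demo uses SQLite via Python's standard `sqlite3` module so it is self-contained (the reply mentions MySQL; the idea is the same), and the tables and column names are invented: materialize the subquery into a temp table, index it, then join, so the planner has no say in evaluation order.

```python
# Materialize a subquery first, index it, then join against it.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL);
    INSERT INTO orders VALUES (1, 10, 50.0), (2, 10, 75.0), (3, 20, 5.0);
    CREATE TABLE customers (id INTEGER, name TEXT);
    INSERT INTO customers VALUES (10, 'Ana'), (20, 'Ben');

    -- Step 1: run the "subquery" first, into a temp table.
    CREATE TEMP TABLE big_spenders AS
        SELECT customer_id, SUM(total) AS spent
        FROM orders GROUP BY customer_id HAVING SUM(total) > 40;

    -- Step 2: index the temp table for the join.
    CREATE INDEX temp.idx_spenders ON big_spenders(customer_id);
""")

# Step 3: join against the already-materialized, indexed result.
rows = con.execute("""
    SELECT c.name, b.spent FROM customers c
    JOIN big_spenders b ON b.customer_id = c.id
""").fetchall()
```

The subquery runs exactly once, before the join, which is the ordering the quoted tweet was fighting the planner over.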
Nick Dobos
Nick Dobos@NickADobos·
seems to be a few competing ideas of what an AI agent is, here's my take:

1. GPT + loop; autoGPT, babyAGI style. Also called hop, skip, or multi-hop agents
2. Prebuilt flows, conversations, and scaffolding of code around a ChatGPT call. Existing app + GPT. A calendar scheduling agent
3. Prompt engineering: prebuilt prompt buttons, complex LLM chains, like chain of thought, RAG, and multi-LLM systems
4. Autonomous agents, proactive goal setting
5. Tool agents: LLMs writing input/commands and code to APIs, DBs, and other systems

- 2 and 3 are misnomers
- 4 is still sci-fi
- 1 and 5 are the real innovations available
Nick Dobos tweet media (4 images)
16 replies · 29 reposts · 302 likes · 97.3K views
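Ideas 1 and 5 from the taxonomy above ("GPT + loop" and tool agents) can be sketched in a few lines. This is a minimal illustration, not anyone's production agent: `fake_model` is a stand-in for a real LLM API call, and the single `add` tool is invented for the demo; the loop structure is the point.

```python
# Minimal "GPT + loop" agent with tool use (ideas 1 and 5 above).

def fake_model(goal, observations):
    """Stand-in for an LLM call: returns either a tool request or a final answer."""
    if not observations:
        return {"tool": "add", "args": (2, 3)}   # model asks for a tool call
    return {"answer": f"result is {observations[-1]}"}

TOOLS = {"add": lambda a, b: a + b}              # idea 5: tools the LLM can drive

def run_agent(goal, max_steps=5):
    observations = []
    for _ in range(max_steps):                   # idea 1: the loop around the model
        action = fake_model(goal, observations)
        if "answer" in action:                   # model decided it is done
            return action["answer"]
        tool = TOOLS[action["tool"]]             # LLM-written input goes to a tool
        observations.append(tool(*action["args"]))
    return None                                  # gave up within the step budget
```

Swapping `fake_model` for a real chat-completion call and `TOOLS` for API/DB wrappers turns this skeleton into the autoGPT-style pattern the tweet describes.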
lejdi koci
lejdi koci@primary_key·
@thekitze We evaluated Svelte, Solid & React. The winner for us is React + shadcn + zustand + zod + react-query. Quite a stack, but porting our apps from our in-house obviajs.com is straightforward and the code looks clean.
0 replies · 0 reposts · 0 likes · 45 views
kitze
kitze@thekitze·
hey useMemo and useCallback can useGoFuckThemselves for real. writing code like this is pure torture and we're gonna laugh about it one day
104 replies · 75 reposts · 1.5K likes · 377.1K views
lejdi koci
lejdi koci@primary_key·
@pwang_szn Did something similar in our company to notify us when software-related bids are published in the Albanian procurement platform 😎
0 replies · 0 reposts · 0 likes · 517 views
peter! 🥷
peter! 🥷@pwang_szn·
ngl, Upwork is an absolute goldmine for ideas 💰
peter! 🥷 tweet media (3 images)
56 replies · 242 reposts · 3.3K likes · 844.2K views
lejdi koci retweeted
Carlos E. Perez
Carlos E. Perez@IntuitMachine·
Introducing the RECONCILE framework:

Overview
- RECONCILE is a multi-agent framework that enables multiple diverse Large Language Models (LLMs) to engage in multi-round discussions and reach consensus on complex reasoning tasks.
- It consists of 3 main phases:
  1. Initial Response Generation
  2. Multi-Round Discussion
  3. Final Answer Generation

Phase 1: Initial Response Generation
- Given a reasoning task Q, each agent Ai generates:
  - An initial answer ai
  - An explanation ei
  - A confidence score pi indicating the likelihood of the answer being correct
- The initial prompt instructs the agent to provide step-by-step reasoning.

Phase 2: Multi-Round Discussion
- RECONCILE facilitates R rounds of discussion between agents.
- In each round r, the discussion prompt Di for agent Ai contains:
  - Grouped answers {aj} from the previous round, summarized based on distinct responses
  - Explanations {ej} from the previous round, grouped according to each answer
  - Confidence scores {pj} estimating other agents' uncertainties
  - Convincing samples Cj for each other agent Aj, consisting of human explanations that can rectify Aj's incorrect answers
- Based on this, each agent Ai provides an updated answer, explanation, and confidence score.
- The goal is to convince other agents to reach a better consensus. Convincing samples teach agents to generate persuasive explanations.

Phase 3: Final Answer Generation
- Discussion continues until reaching consensus or a maximum of R rounds.
- The final answer is generated via weighted voting using confidence scores:
  - Rescale confidence scores pi to deal with overconfidence
  - Convert pi to weight wi
  - Take a weighted vote across all agents' answers to determine the final answer

This multi-round discuss-and-convince approach with diverse LLMs improves reasoning capabilities.
Carlos E. Perez tweet media
3 replies · 36 reposts · 158 likes · 20.5K views
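Phase 3 of the RECONCILE description above (rescale confidences, convert to weights, weighted vote) is concrete enough to sketch. The exact rescaling function here is an assumption for illustration: the framework rescales to counter overconfidence, but this particular mapping is invented, as are the example answers.

```python
# Sketch of RECONCILE's Phase 3: confidence-weighted voting over agent answers.
from collections import defaultdict

def rescale(p):
    # Assumed anti-overconfidence mapping: squash the crowded top of [0.5, 1]
    # and give low-confidence answers only a small residual weight.
    return max(0.0, (p - 0.5) * 2) if p >= 0.5 else 0.1

def weighted_vote(answers_with_conf):
    """answers_with_conf: list of (answer, confidence pi in [0, 1]).
    Converts each pi to a weight wi and returns the highest-weighted answer."""
    weights = defaultdict(float)
    for answer, p in answers_with_conf:
        weights[answer] += rescale(p)   # wi accumulated per distinct answer
    return max(weights, key=weights.get)

# Three agents after the discussion rounds (invented example):
final = weighted_vote([("42", 0.9), ("41", 0.6), ("42", 0.7)])
```

Two moderately confident agents agreeing on "42" outweigh one agent on "41", which is the consensus behavior the tweet describes.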
lejdi koci retweeted
Physics In History
Physics In History@PhysInHistory·
I believe in intuition and inspiration. ... At times I feel certain I am right while not knowing the reason. When the eclipse of 1919 confirmed my intuition, I was not in the least surprised. In fact I would have been astonished had it turned out otherwise. Imagination is more important than knowledge. For knowledge is limited, whereas imagination embraces the entire world, stimulating progress, giving birth to evolution. It is, strictly speaking, a real factor in scientific research. -- as mentioned in Cosmic Religion : With Other Opinions and Aphorisms (1931) by Albert Einstein 📷A. Einstein photographed by Ben Meyer at Einstein's home in Santa Barbara, Caltech Archives Image
Physics In History tweet media
70 replies · 648 reposts · 3.1K likes · 332.6K views
lejdi koci retweeted
Physics In History
Physics In History@PhysInHistory·
"We have to remember that what we observe is not nature herself, but nature exposed to our method of questioning." -- Werner Heisenberg (1901-1976)
Physics In History tweet media
92 replies · 877 reposts · 4.2K likes · 283.4K views
lejdi koci retweeted
Pietro Schirano
Pietro Schirano@skirano·
Mind blown! 🤯 New research shows a model like Stable Diffusion secretly learns 3D geometry, even without any depth data. It's building a 3D game engine in its brain to realistically draw what we describe. arxiv.org/abs/2306.05720
11 replies · 97 reposts · 553 likes · 84.5K views
lejdi koci retweeted
Melkey
Melkey@MelkeyDev·
It’s like assembling the Justice League 😂
Melkey tweet media
13 replies · 65 reposts · 940 likes · 62K views
Santiago
Santiago@svpino·
Scrum is a cancer.

I've been writing software for 25 years, and nothing renders a software team useless like Scrum does. Some anecdotes:

1. They tried to convince me that Poker is a planning tool, not a game.
2. If you want to be more efficient, you must add process, not remove it. They had us attending the "ceremonies," a fancy name for a buttload of meetings: stand-ups, groomings, planning, retrospectives, and Scrum of Scrums. We spent more time talking than doing.
3. We prohibited laptops in meetings. We had to stand. We passed a ball around to keep everyone paying attention.
4. We spent more time estimating story points than writing software. Story points measure complexity, not time, but we had to decide how many story points fit in a sprint.
5. I had to use t-shirt sizes to estimate software.
6. We measured how much it cost to deliver one story point and then wrote contracts where clients paid for a package of "500 story points."
7. Management lost it when they found that 500 story points in one project weren't the same as 500 story points on another project. We had many meetings to fix this.
8. Imagine having a manager, a scrum master, a product owner, and a tech lead. You had to answer to all of them and none simultaneously.
9. We paid people who told us whether we were "burning down points" fast enough. Weren't story points about complexity instead of time? Never mind.

I believe in Agile, but this ain't agile. We brought professional Scrum trainers. We paid people from our team to get certified. We tried Scrum this way and that other way. We spent years doing it. The result was always the same: It didn't work.

Scrum is a cancer that will eat your development team. Scrum is not for developers; it's another tool for managers to feel they are in control.

But the best about Scrum are those who look you in the eye and tell you: "If it doesn't work for you, you are doing it wrong. Scrum is anything that works for your team."

Sure it is.
Santiago tweet media
2K replies · 4.3K reposts · 25K likes · 4.8M views
lejdi koci
lejdi koci@primary_key·
@teknium I couldn't get Copilot to produce good results in Visual Studio with a C# .NET project. Strangely enough, it was a disappointment even compared to ChatGPT 3.5…
0 replies · 0 reposts · 1 like · 34 views
Teknium 🪽
Teknium 🪽@Teknium·
Fellow programmers I have another question, do you find Github Copilot to be useful even if you have chatgpt-4/code interpreter? Is it worth adding it back to my AI stack of ChatGPT+ & Midjourney?
136 replies · 6 reposts · 228 likes · 124.8K views