Kineteq.ai

3.6K posts

@ScienceOrMyth

Building in progress….. @ https://t.co/0MrCB7UWSR Data scientist AI and other things at https://t.co/MxLqksvKxk

Joined April 2015
81 Following · 241 Followers
Aryan @justbyte_
What should I add to this setup??
[image]
43 replies · 4 reposts · 98 likes · 3.7K views
Paul Mit @pmitu
Startup is not a sprint, it's a marathon.
124 replies · 45 reposts · 444 likes · 9.4K views
Kineteq.ai reposted
Chubby♨️ @kimmonismus
Jerome Powell: “There is zero net job creation in the private sector.” It’s gonna be a tough year.
27 replies · 27 reposts · 395 likes · 32.5K views
Kineteq.ai reposted
Kanika @KanikaBK
🤯 I just ended up reading this RESEARCH PAPER. THIS MADE ME UNCOMFORTABLE.

The Kimi team (affiliated with Moonshot AI) just discovered that every major AI model has been silently forgetting its own thoughts, and they argue that a decade-old design flaw has been crippling every LLM ever built. Here is what they found.

36 researchers at Moonshot AI investigated how information flows through the layers of large language models. Every modern AI (ChatGPT, Claude, Gemini) uses something called residual connections: the internal wiring that carries information from one layer to the next.

The problem: this wiring treats every layer equally. It blindly stacks every piece of information on top of every other piece with the same fixed weight. As the model gets deeper, earlier insights get buried under noise. By the time the AI reaches its final layers, the critical early thinking that shaped its understanding is effectively gone.

The researchers found that so much early information gets lost that significant chunks of a model's earliest layers can be completely removed from a trained AI with barely any impact. Those layers did real work during training; the model just can't access it anymore.

It gets worse. This isn't a minor inefficiency. It's been hiding inside every transformer-based AI for a decade: the fundamental design of how AI carries information through its own layers hasn't changed since 2015.

So the Kimi team built the fix. They replaced the rigid, fixed wiring with something that lets each layer dynamically choose which earlier thoughts to pay attention to. Instead of blindly stacking everything, the AI now queries its own past layers and selectively retrieves only what matters. They call it Attention Residuals.

And the results are not subtle. They integrated it into a 48-billion-parameter model and trained it on 1.4 trillion tokens. It improved on every single benchmark tested: reasoning jumped 7.5 points, math improved by 3.6 points, and coding ability gained 3.1 points. Not on cherry-picked tasks. On every evaluation they ran.

Here's the trap nobody saw coming. When they gave the AI this ability to selectively retrieve its own past thoughts, the optimal shape of a model changed entirely. Standard models work best when they're wide and shallow; with this fix, the ideal architecture shifted to deep and narrow. The AI's future isn't bigger brains. It's deeper ones.

The overhead? Less than 2% at inference, less than 4% during training. A decade-old bottleneck fixed at negligible cost.

Every AI you use today - every chatbot, every coding assistant, every reasoning model - is running on wiring that forces it to forget what it learned three layers ago. The fix exists. It works on every benchmark. It costs almost nothing. And not a single major AI company has shipped it yet. Why do you think?
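The mechanism the thread describes - each layer attending over the outputs of all earlier layers instead of summing them with one fixed weight - can be sketched in a few lines of NumPy. This is a hedged illustration of the general idea only, not the Kimi paper's actual formulation; the function names, the single query vector, and the plain dot-product scoring are assumptions made for the sketch.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def fixed_residual(past_outputs):
    # Standard residual stream: every earlier layer's output is added
    # with the same fixed weight of 1, regardless of relevance, so
    # layer 0's signal is diluted 1:N as depth N grows.
    return np.sum(past_outputs, axis=0)

def attention_residual(past_outputs, query):
    # Sketch of an "attention residual": score each earlier layer's
    # output against a query from the current layer, then mix them
    # with softmax weights so the most relevant early layers dominate.
    stacked = np.stack(past_outputs)   # (num_layers, dim)
    scores = stacked @ query           # (num_layers,) relevance scores
    weights = softmax(scores)          # non-negative, sums to 1
    return weights @ stacked, weights  # mixed (dim,), weights (num_layers,)

rng = np.random.default_rng(0)
past = [rng.normal(size=16) for _ in range(6)]  # six earlier layers' outputs
query = rng.normal(size=16)                     # current layer's query

mixed, weights = attention_residual(past, query)
```

Under the fixed scheme, every layer contributes equally no matter how deep the model gets; under the attention mix, a layer whose output matches the current query keeps a large weight even hundreds of layers later, which is the "selective retrieval" the thread is describing.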
[image]
14 replies · 41 reposts · 138 likes · 9.3K views
Rohan Paul @rohanpaul_ai
Chamath on how AI agents are making the "10x engineer" distinction disappear, because the most efficient "code paths" are now obvious to everyone. Just as AI solved chess and removed the mystery of the best move, AI is doing the same for coding, making the process reductive and removing technical differentiation.

"I'm going to say something controversial: I don't think developers anymore have good judgment. Developers get to the answer, or they don't get to the answer, and that's what agents have done. The 10x engineer used to have better judgment than the 1x engineer, but by making everybody a 10x engineer, you're taking judgment away. You're taking code paths that are now obvious and making them available to everybody. It's effectively like what happened in chess: an AI created a solver so everybody understood the most efficient path in every single spot to do the most EV-positive (expected value positive) thing. Coding is very similar in that way; you can reduce it and view it very reductively, so there is no differentiation in code."

From @theallinpod YT channel (link in comment)
181 replies · 80 reposts · 763 likes · 347.6K views
Bojan Tunguz @tunguz
Neural nets are overkill. Especially for inference.
5 replies · 0 reposts · 25 likes · 3.2K views
Kineteq.ai reposted
Danny Limanseta @DannyLimanseta
I don't care if people call it AI slop. Vibe coding games is fun. It's become my main hobby now, and no one can take that away from me.
161 replies · 25 reposts · 795 likes · 33.7K views
Kineteq.ai @ScienceOrMyth
@edandersen How many tokens is 250k dollars? What are you even building?
0 replies · 0 reposts · 0 likes · 54 views
Ed Andersen @edandersen
Software engineers will not be trusted to spend 50% of their salary on variable opex costs with no guarantee of productivity, unless they are executive level. This is a pipe dream to sell GPUs.
TFTC @TFTC21

Jensen Huang: "If that $500,000 engineer did not consume at least $250,000 worth of tokens, I am going to be deeply alarmed. This is no different than a chip designer who says 'I'm just going to use paper and pencil. I don't think I'm going to need any CAD tools.'"
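The reply upthread asks how many tokens $250k actually buys. A back-of-envelope conversion, where the per-million-token prices are purely assumed for illustration (real prices vary widely by model and vendor):

```python
def tokens_for_budget(budget_usd: float, usd_per_million_tokens: float) -> int:
    # Straight unit conversion: dollars -> millions of tokens -> tokens.
    return int(budget_usd / usd_per_million_tokens * 1_000_000)

# Assumed price points, NOT quotes from any vendor's price list.
frontier = tokens_for_budget(250_000, 10.0)  # at $10/M tokens: 25 billion tokens
budget   = tokens_for_budget(250_000, 0.50)  # at $0.50/M tokens: 500 billion tokens
```

The answer spans more than an order of magnitude depending on the assumed price, which is why the question "how many tokens is 250k dollars?" has no single answer.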

36 replies · 34 reposts · 633 likes · 23.2K views
Kharis @kharis_micheal
Pitch me your best advice in 2 words
426 replies · 161 reposts · 641 likes · 46K views
Kineteq.ai @ScienceOrMyth
@BetterCallMedhi We will watch the monopoly game unfold, where everyone races to collect all the AI pieces.
0 replies · 0 reposts · 0 likes · 13 views
Mehdi (e/λ) @BetterCallMedhi
the era of software eating the world is over. we just entered the era of software becoming the world: designing molecules from scratch, engineering materials algorithmically, running factories with 0 humans & millisecond precision, rewriting biological systems from first principles…

every massive fortune of the next 30y will be built at the intersection of bits and atoms. the guys who understand code AND physics AND chemistry AND manufacturing AND energy will own everything.

pure software is a commodity now; the real moat is turning intelligence into physical reality at industrial scale
The Wall Street Journal @WSJ

Breaking: Jeff Bezos is in talks to raise $100 billion for a new fund that would buy manufacturing companies and use AI to automate them wsj.com/tech/jeff-bezo…

17 replies · 34 reposts · 220 likes · 16.7K views
Kineteq.ai reposted
Santiago @svpino
No, I don't think AI should be thanked, credited for its work, celebrated, chastised, or treated as anything other than a tool. Should we start crediting Visual Studio Code on every commit? Should we also credit Python? How about crediting Apple for their computers, which made that particular commit possible?
Josh Ellithorpe @zquestz

@svpino So you don't believe an AI should be credited for their work? What about in an age when AGI exists?

24 replies · 4 reposts · 92 likes · 8.2K views
Kineteq.ai reposted
Josh Kale @JoshKale
This is straight out of Black Mirror...

DoorDash's new app pays delivery drivers to strap on body cameras and film themselves doing household chores to train AI robots.

The tasks:
- Wash five dishes on camera, holding each up to the lens
- Film yourself folding laundry
- Record an unscripted conversation in Spanish
- Walk a grocery aisle filming every shelf
- A few bucks per clip

DoorDash feeds this into AI and robotics models, and sells the data to partners across tech, retail, and hospitality. They have 8 million drivers across nearly every zip code in America. It's a real-world data-collection machine no AI lab could replicate.

Meanwhile, DoorDash is actively deploying autonomous delivery robots in Arizona. Partnered with Waymo for driverless deliveries in Phoenix. Signed a deal with Serve Robotics for sidewalk bots in LA. Committed to commercializing autonomous delivery this year.

Uber and Instacart are running the same playbook. Voice recordings. Photo uploads. Wrist-mounted cameras capturing every hand movement while workers cook dinner. The entire gig industry is converting its workforce into AI training data.

Funny side note: DoorDash also pays drivers $11 to close Waymo car doors the robot can't close itself.

The most valuable new gig might just be showing a machine how to do yours. Wild times
[image]
Andy Fang @andyfang

Introducing Dasher Tasks. Dashers can now get paid to do general tasks. We think this will be huge for building the frontier of physical intelligence. Look forward to seeing where this goes!

15 replies · 10 reposts · 97 likes · 25.8K views
Kineteq.ai @ScienceOrMyth
@SynBio1 the space where all viable and valid theories exist is pretty small.
0 replies · 0 reposts · 0 likes · 30 views
Kineteq.ai @ScienceOrMyth
@davidsirota It was a different era then. We are always nostalgic for the past. It is true, though, that the economy was much better back then.
0 replies · 0 reposts · 0 likes · 45 views
David Sirota @davidsirota
I don't idealize the past, but I think people were happier before there were supercomputers in everyone's pockets whose every social media buzz/notification is a cortisol-prompting reminder that the world is ending & that everyone else is more successful & living a better life.
30 replies · 73 reposts · 484 likes · 8.8K views
Kineteq.ai @ScienceOrMyth
@r0ck3t23 There are a lot of grifters and a lot of psychosis going around. ASI will be software. It is not alive nor can it ever be. Don't let the Wizards of Oz fool you.
0 replies · 0 reposts · 0 likes · 62 views
Dustin @r0ck3t23
Jensen Huang just told every AI leader in the room to grow up. Stop scaring the public with science fiction. Start communicating like the weight of civilization is on your shoulders. Because it is.

Huang: “AI is not a biological being. It is not alien. It is not conscious. It is computer software.”

That single statement dismantles half the panic surrounding this industry. The mainstream conversation is dominated by people projecting human malice onto math. Alien consciousness onto code. Existential dread onto a software architecture we built, we trained, and we can read.

Huang: “We say things like, ‘We don’t understand it at all.’ It is not true. We understand a lot of things about this technology.”

When builders tell the public they don’t understand their own creation, the public hears threat. The state responds with control. That is already happening.

Palihapitiya asked Huang what he would have told Anthropic during their regulatory clash with the Department of Defense. Huang didn’t attack the technology. He attacked the communication.

Huang: “The desire to warn people about the capability of the technology is really terrific. We just have to make sure that we understand that the world has a spectrum, and that warning is good, scaring is less good because this technology is too important to us.”

Warning shows risks, mitigation, why upside overwhelms downside. Scaring says we might be building something that destroys us and we can’t stop it. One builds trust. The other invites regulation written in panic.

Huang: “To say things that are quite extreme, quite catastrophic, that there’s no evidence of it happening, could be more damaging than people think.”

Projecting catastrophe without evidence is not caution. It is sabotage. When your technology is embedded in national defense, the financial system, and healthcare infrastructure, your words carry structural weight. If the architects act terrified of their own product, the response is predictable. Governments step in. They restrict. They seize control of something they don’t understand because the builders told them to be afraid.

Huang: “There was a time when nobody listened to us, but now because technology is so important in the social fabric, such an important industry, so important to national security, our words do matter.”

Most tech founders have not internalized this. You are no longer a startup founder disrupting an industry. You are running infrastructure that nations depend on. Your statements move policy. Your framing shapes legislation. Your tone determines whether governments treat you as partner or threat.

Huang: “We have to be much more circumspect, we have to be more moderate, we have to be more balanced, we have to be far more thoughtful.”

Huang did not ask for silence. He asked for precision. The leaders who cannot tell the difference will not be leading for long.
91 replies · 72 reposts · 363 likes · 34K views
Sergey Karayev @sergeykarayev
Running agents locally is a dead end.

The future of software development is hundreds of agents running at all times of the day - in response to bug alerts, emails, Slack messages, meetings, and because they were launched by other agents. The only sane way to support this is with cloud containers.

Local agents hit a wall quickly:
• No scale. You can only run as many agents (and copies of your app) as your hardware allows.
• No isolation. Local agents share your filesystem, network, and credentials. One rogue agent can affect everything else.
• No team visibility. Teammates can't see what your agents are doing, review their work, or interact with them.
• No always-on capability. Agents can't respond to signals (alerts, messages, other agents) when your machine is off or asleep.

Cloud agents solve all of these problems. Each agent runs in its own isolated container with its own environment, and they can run 24/7 without depending on any single machine.

This year, every software company will have to make the transition from work happening on developers' local machines from 9am-6pm to work happening in the cloud 24/7 -- or get left behind by companies that do.
56 replies · 13 reposts · 159 likes · 13.2K views
Mgoes (bio/acc 🤖💉) @m_goes_distance
careers left in the singularity:
- niche podcaster
- sex machine operator
- farmer
- philosopher
- time machine operator
- human verification specialist
- blood boy
- trad wife
- claude operator
- cult leader
- shitpoaster
- peptide vendor
- lover/warrior/magician
- biohacker storyteller
- monk
- personality designer
- afterlife curator
- sec of state

what else?
47 replies · 19 reposts · 179 likes · 10.4K views
Kineteq.ai reposted
Elon Musk @elonmusk
Tesla Semi is super fun to drive
Sawyer Merritt @SawyerMerritt

WSJ: Tesla Finally Has Its First Semi-Truck and It’s Already a Hit With Truckers.

"Truckers who drove it in pilot tests say they loved features including a centered driving position, faster charging and longer range for about $100,000 less than other battery-electric trucks.

Angel Rodriguez, a 56-year-old truck driver for Hight Logistics in Long Beach, Calif., recently swapped out a 13-gear diesel truck for a Tesla Semi, which is automatic, for a one-month pilot test. “It’s just easier on your body. It’s less stressful because you’re not really having to engage the clutch and the stick shift.”

Big F Transport employs five mechanics to service more than 40 diesel-powered rigs and a fleet of trailer chassis in Wilmington, Calif. “If we go all EV we will only need one [mechanic] to service chassis,” said Geovanny Melendez, the carrier’s VP of operations, who went to see the Semi earlier this month at a ride-and-drive event near the Port of Long Beach.

Jennie Abarca, co-founder and CEO of King Fio Trucking in Long Beach, Calif., once worked as a truck dispatcher and her husband is a truck driver, so she knows all too well the toll a diesel engine takes on people’s lungs and hearing. She eventually wants to swap out King Fio’s 27 diesel trucks to create an all-electric fleet.

King Fio already has 11 battery-electric trucks from Volvo and Nikola. But the company limits those trucks to shorter trips to and from local ports because they only have a range of about 225 miles. The Semi, by contrast, can travel 500 miles on a single charge, according to Tesla. For King Fio that means two or three round-trips a day from Long Beach to warehouses in the nearby Inland Empire or a single round-trip to Las Vegas. She has 20 Semis on order.

“The Teslas change everything,” Abarca said. “It opens up a whole different type of delivery that I can make.”

1.9K replies · 2.3K reposts · 18.8K likes · 7.2M views
Matt Wolfe @mreflow
Anyone else find that they're starting to talk more and more like an LLM? I feel like I've spent so much time working with them that I take on more and more of the vocabulary that's been output by them. I'll write X posts and think "people will think I had ChatGPT write this." I'll say things like "That's directionally correct but misses some key points," and then think, "holy shit, that's how ChatGPT would have worded it."
107 replies · 3 reposts · 119 likes · 6.4K views
lyv ⌘ @wholyv
unpopular opinion: OpenClaw is just pure hype. it’s nowhere near as good as these Xfluencers tell you.
109 replies · 12 reposts · 566 likes · 20.7K views