jean van id

1.3K posts

@Shankspranks

🐈🐒🐧🐤🦔🦧🐿️🐃🐆🦈🐡🐋🐝

Cape Town, South Africa · Joined July 2009
464 Following · 93 Followers
jean van id@Shankspranks·
@chamath You get your model to summarize the session, then you chunk, embed, and load it into a database (Azure SQL, Cosmos DB, or PostgreSQL) as a vector data type. You then add a DiskANN index, and for each new session you can pull the last x records and also the top 20 similar records.
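The retrieval pattern described above can be sketched in pure Python. The toy hash-based `embed` function and the in-memory `SessionMemory` class are stand-ins for a real embedding model and a DiskANN-indexed vector column; `last_x` and `top_k` are illustrative parameters, not anything from the original post.

```python
import math
import hashlib

def embed(text: str, dim: int = 32) -> list[float]:
    # Toy embedding: hash each word into a fixed-size, L2-normalized vector.
    # Stands in for a real embedding model from your LLM provider.
    v = [0.0] * dim
    for word in text.lower().split():
        h = int(hashlib.md5(word.encode()).hexdigest(), 16)
        v[h % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def cosine(a: list[float], b: list[float]) -> float:
    # Both vectors are unit-length, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

class SessionMemory:
    """In-memory stand-in for a vector column plus a DiskANN index."""

    def __init__(self):
        self.rows = []  # (summary_chunk, vector), in insertion order

    def add_summary(self, chunk: str) -> None:
        self.rows.append((chunk, embed(chunk)))

    def context_for(self, query: str, last_x: int = 3, top_k: int = 2) -> list[str]:
        # Recency window: the last x stored summaries.
        recent = [c for c, _ in self.rows[-last_x:]]
        # Similarity search: top-k chunks ranked by cosine similarity.
        q = embed(query)
        ranked = sorted(self.rows, key=lambda r: cosine(q, r[1]), reverse=True)
        similar = [c for c, _ in ranked[:top_k]]
        # De-duplicate while preserving order: recent first, then similar.
        seen, out = set(), []
        for c in recent + similar:
            if c not in seen:
                seen.add(c)
                out.append(c)
        return out

mem = SessionMemory()
mem.add_summary("User prefers Postgres with a vector column for storage")
mem.add_summary("Discussed chunking strategy for long documents")
mem.add_summary("User asked about DiskANN index tuning")
print(mem.context_for("how should I index vectors in Postgres?"))
```

In production the ranking step would be a single indexed SQL query rather than a full scan; the shape of the result (recency window plus similarity hits) is the point here.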
0 replies · 0 reposts · 0 likes · 24 views
Chamath Palihapitiya@chamath·
This may be a dumb question but I’ll ask it here anyways: I can’t find a good way for my various AI chats to automatically sync its conversation history into a structured knowledge base. So that as I update various chats from time to time and refine context, my knowledge base automatically grows with this new info.
1.1K replies · 59 reposts · 2.4K likes · 797.9K views
Aryan@justbyte_·
Bro seriously, what is the difference between this??
Aryan tweet media
56 replies · 10 reposts · 497 likes · 74.8K views
jean van id@Shankspranks·
@kareem_carr 🤣 No, we are now forced to multitask and hand-hold to ensure design patterns are adhered to and that our little buddy doesn't do ridiculously hacky patches. We are now the tech debt police. But yes, I will drink coffee while I watch and do manual approvals so I can monitor closely.
0 replies · 0 reposts · 0 likes · 45 views
Dr Kareem Carr@kareem_carr·
I keep hearing that software engineers don’t write much code anymore and it’s mostly AI now. Can any software engineers confirm how true this is? Do you just drink coffee and watch Claude code all day now?
535 replies · 12 reposts · 580 likes · 171.8K views
Anton Zhiyanov@ohmypy·
I couldn't care less about Claude Code's source being leaked on npm. What terrifies me is that it's 512,000 lines of TypeScript code. HALF A MILLION lines of code for what's essentially a glorified API wrapper. I think the crucial point where our reality took a wrong turn was the invention of JavaScript. And we cemented our path to doom with the invention of TypeScript. Half a million lines of code. Dear Lord, have mercy on us.
318 replies · 174 reposts · 3.5K likes · 407.8K views
jean van id reposted
👅 Goofin 👅@GoofingOffToday·
Who's winning 🤣
423 replies · 206 reposts · 7K likes · 1.3M views
jean van id reposted
theburntpeanut@theburntpeanut·
BUNGULATORS!!! We are running a 2 PC giveaway. 2x RTX 5070 STARFORGE Gaming PC Bundles. To Enter, perform these Task via the Link Below: 🔁Repost + Like ✅Follow All Official Burnt Peanut Channels vast.link/peanut
theburntpeanut tweet media
5.1K replies · 19.1K reposts · 25.5K likes · 992.7K views
Chamath Palihapitiya@chamath·
Our cracked team just used Software Factory to rebuild and replace Jira in a little more than a month. We first spent 3.5 weeks planning. This is Software Factory’s superpower. It allowed our lead PM, Designer and Architect to thoughtfully describe and detail exactly what they wanted. Software Factory then did the heavy lifting in filling in the blanks and allowing our senior tech folks to sharpen the direction of what they wanted. Then in 2.5 weeks 2.5 junior devs built a replacement. This will launch as an updated Planner module inside of Software Factory on Tuesday. It’s beautiful, clean and super useful. Try it here: 8090.ai
262 replies · 83 reposts · 1.4K likes · 1.1M views
Ayaan 🐧@twtayaan·
🚨 AWS just killed the most annoying thing about S3. For years, bucket names had to be globally unique. If someone on the other side of the world took "prod-logs"... You were stuck naming yours "prod-logs-company-us-east-1-xyz". That ends today. AWS just introduced Account Regional Namespaces. Now, your bucket name only needs to be unique within your specific account and region. You can finally just name your buckets: - logs - images - backups No global naming conflicts. Clean code. Predictable infrastructure. You can enable it right now in the console or via API across 37 regions. But here’s the catch. Your existing buckets are stuck on the old global model. They cannot be renamed.
Ayaan 🐧 tweet media
49 replies · 74 reposts · 964 likes · 157.9K views
jean van id reposted
Anish Moonka@anishmoonka·
You can’t buy Linux. It’s free. Always has been. So IBM did the next best thing: it spent $34 billion buying Red Hat, a company whose entire business is selling tech support for this free software. Largest software acquisition in history. For support contracts on something anyone can download for $0. The “side project” story, while true, is maybe 5% of what actually happened since. Linux itself is managed by a nonprofit, the Linux Foundation. That nonprofit pulled in $311 million last year. Only $8.4 million of that (2.6%) actually went to Linux itself. The rest of the funds support ~1,500 other open source projects, events, and training. Every Fortune 100 tech company is a paying member. And here’s who actually builds this “free” software now: 84% of the code changes to Linux in 2025 come from developers on corporate payroll. Intel is the biggest contributor. Google is second. Huawei, Oracle, AMD, and Meta all have engineers writing Linux code full-time. Over 1,780 companies pay people to work on it. The solo genius in a dorm room stopped being the real story around 1998. The wildest part: over 65% of Microsoft’s cloud computers run Linux. Microsoft, the company whose former CEO once called Linux “a cancer,” now runs more Linux than Windows on its own servers. Amazon and Google’s clouds are even higher, both above 90%. A 2024 Harvard Business School study attempted to calculate how much companies would spend if all free, open-source software vanished tomorrow. The answer: $8.8 trillion more per year. 3.5x what they currently spend. And that number didn’t even include operating systems like Linux. Linus Torvalds still personally approves every major code change. He makes about $1.5 million a year. He also built Git, the tool that powers GitHub (which Microsoft bought for $7.5 billion). Two pieces of software the entire tech industry runs on, same guy. Linux started as 10,239 lines of code. It’s now over 40 million. 
Every one of the world’s 500 fastest supercomputers runs it. 96% of the top million websites sit on it. Every Android phone has Linux inside it. That’s roughly 3 billion devices in people’s pockets. It’s the largest collaborative engineering project in human history, free to use, funded by the same corporations it was supposed to replace.
Sahil@sahill_og

Linus Torvalds created Linux at 21 without Claude or any other AI. - He didn't have a co-founder. - No VC funding. No office. - No team. - Just a personal project he posted to a mailing list: "I'm doing a free OS." 33 years later, it runs 97% of the world's servers, all smartphones, and the International Space Station. The most important software in history started as someone's side project. Absolute legend.

86 replies · 1.1K reposts · 7.9K likes · 808.6K views
Kritika@kritikakodes·
Can I ask a dumb dev question… Why do developers prefer VS Code over full IDEs?
193 replies · 11 reposts · 509 likes · 153.5K views
jean van id reposted
Guri Singh@heygurisingh·
Holy shit... Microsoft open-sourced an inference framework that runs a 100B parameter LLM on a single CPU. It's called BitNet. And it does what was supposed to be impossible. No GPU. No cloud. No $10K hardware setup. Just your laptop running a 100-billion parameter model at human reading speed. Here's how it works: Every other LLM stores weights in 32-bit or 16-bit floats. BitNet uses 1.58 bits. Weights are ternary: just -1, 0, or +1. That's it. No floats. No expensive matrix math. Pure integer operations your CPU was already built for. The result: - 100B model runs on a single CPU at 5-7 tokens/second - 2.37x to 6.17x faster than llama.cpp on x86 - 82% lower energy consumption on x86 CPUs - 1.37x to 5.07x speedup on ARM (your MacBook) - Memory drops by 16-32x vs full-precision models The wildest part: Accuracy barely moves. BitNet b1.58 2B4T, their flagship model, was trained on 4 trillion tokens and benchmarks competitively against full-precision models of the same size. The quantization isn't destroying quality. It's just removing the bloat. What this actually means: - Run AI completely offline. Your data never leaves your machine - Deploy LLMs on phones, IoT devices, edge hardware - No more cloud API bills for inference - AI in regions with no reliable internet The model supports ARM and x86. Works on your MacBook, your Linux box, your Windows machine. 27.4K GitHub stars. 2.2K forks. Built by Microsoft Research. 100% Open Source. MIT License.
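The ternary idea is simple enough to sketch in a few lines. The following pure-Python absmean-style quantizer illustrates the 1.58-bit concept only; it is not Microsoft's actual BitNet kernel, and the function names and the tiny weight vector are made up for the example.

```python
def ternary_quantize(weights):
    """Absmean-style ternary quantization: map each weight to -1, 0, or +1.

    Scale by the mean absolute weight, round, then clip to {-1, 0, +1}.
    """
    scale = sum(abs(w) for w in weights) / len(weights) or 1.0
    q = [max(-1, min(1, round(w / scale))) for w in weights]
    return q, scale

def ternary_dot(q, scale, x):
    # With ternary weights the dot product needs no multiplies on the
    # weight side: each activation is added, subtracted, or skipped.
    acc = 0.0
    for w, xi in zip(q, x):
        if w == 1:
            acc += xi
        elif w == -1:
            acc -= xi
    return acc * scale

w = [0.9, -0.05, -1.2, 0.4, 0.02]
q, s = ternary_quantize(w)
print(q)  # every entry is -1, 0, or +1
print(ternary_dot(q, s, [1.0, 2.0, 3.0, 4.0, 5.0]))
```

Real implementations pack these ternary values into a couple of bits each and use vectorized integer kernels, which is where the memory and speed numbers in the post come from; the add/subtract/skip structure is the core trick.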
881 replies · 2.6K reposts · 15.2K likes · 2.2M views
jean van id@Shankspranks·
@svpino Created an MCP server and AI Agent to add numbers to a list 🤣🤣🤣🤣
0 replies · 0 reposts · 0 likes · 28 views
Santiago@svpino·
I have an agent taking every new Stripe payment and inserting the information into a spreadsheet. It uses an MCP server to access the spreadsheet and determine the first empty row to write the new payment. Everything was working fine until this morning. When checking the contents of a spreadsheet column, the MCP server returns something like this: [$1000, $1500, $1200, null, null, $100, $235, , , , , , ,] Notice the null and the empty values (I'm not sure why there is a difference, but in the spreadsheet, both represent empty cells). Today, my agent decided that `null` cells meant they weren't empty (they are). Here is the funny part: When the agent didn't find any empty cells within the hardcoded range I gave it to write new values, it expanded its search to the next set of rows. I hardcoded: "Look for empty values within B2:B100". The agent didn't find empty values, so it asked the MCP to return any cells that weren't null between B100:B200. Two huge problems: 1. It violated my requirements. 2. It made a mistake in the filter (non-null cells include cells with data!) The agent overwrote existing data in the spreadsheet. I realized this happened only because I don't trust it. I didn't lose any data because Google Spreadsheets keeps a history of every change. This is where we are right now.
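A defensive version of the empty-cell check is straightforward to sketch. This assumes the MCP server's `null` surfaces as Python `None` and empty cells as `""`; `first_empty_row` and its bounds are hypothetical names for illustration, not Santiago's actual code. The key moves are treating both empty encodings identically and raising instead of silently searching outside the allowed range.

```python
def first_empty_row(column, start_row=2, end_row=100):
    """Return the 1-based row index of the first empty cell in a column slice.

    Treats both None (a JSON null from the server) and "" as empty, so the
    caller never has to guess. Raises rather than expanding the search window,
    which keeps the hardcoded range an actual hard limit.
    """
    for offset, value in enumerate(column):
        if value is None or value == "":
            return start_row + offset
    raise ValueError(f"no empty cell in B{start_row}:B{end_row}")

# The column shape from the post: filled cells, JSON nulls, then empty strings.
cells = ["$1000", "$1500", "$1200", None, None, "$100", "$235", "", ""]
print(first_empty_row(cells))  # row 5: the first None counts as empty
```

Pushing this check into deterministic tool code, instead of letting the agent interpret raw cell values, is what prevents the "null means not empty" failure described above.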
63 replies · 18 reposts · 206 likes · 39.3K views
Gingy@RNGingy·
Goodbye pink hair 🥹
Gingy tweet media
467 replies · 81 reposts · 16.5K likes · 414.6K views
Chamath Palihapitiya@chamath·
In a democracy, it’s absolutely ok to define who can use the things you make and how. But it’s also absolutely ok for the Government to lose trust in you, tell you to fuck off and find an alternative. It’s also absolutely ok for you to nuke your own company in the process. The timing of this is not good for Anthropic and could be a potential boon to every other model that is exceeding expectations in their upcoming version (Grok, OAI, Gemini). More generally, I don’t see how this isn’t a slippery slope. What if a model maker updates their ToS that would block a use case that is legal but subjective? Agreeable in some states but not in others? What about in different countries with different governance or religions? It’s a huge can of worms. How can a government or company rely on a model that could have an ever-changing definition of what’s allowed without taking on major business/governance risk? They won’t. My hunch is that the company that embraces the “no holds barred” ToS will win because it’s the least risky to adopt wrt long term risk of getting rug-pulled.
Chamath Palihapitiya tweet media
423 replies · 145 reposts · 2K likes · 272.2K views
ARC Raiders Informer@ArcRaidersInfo·
@CrazyLegz__ @XTNKTWS6 We didn’t say the image is old, we said some believe it is. Regardless, we’re hearing of players being banned, whether the image is old or not.
6 replies · 0 reposts · 11 likes · 4.8K views
ARC Raiders Informer@ArcRaidersInfo·
Players are receiving temporary BANS in Arc Raiders for participating in the following glitches: • Duplication glitch • Infinite ammo glitch • Infinite repair weapon glitch Are you safe? 😳
ARC Raiders Informer tweet media
316 replies · 97 reposts · 3.1K likes · 400.5K views
jean van id reposted
Peter Girnus 🦅@gothburz·
I'm the VP of AI at Apple. I've been here since 2011. I watched Siri launch. It was revolutionary. For about six months. Then Google Assistant came out. Then Alexa. Then ChatGPT. We kept saying we were "focused on privacy." Privacy is what you say when you're losing. Three years ago the board asked about our AI strategy. I showed them a slide that said "On-Device Intelligence." They nodded. They didn't know what it meant. Neither did I. But it had a picture of a neural network. Neural networks look impressive. Even when they don't work. Last year someone asked Siri to set a timer. It opened a Wikipedia article about timers. Tim saw the meme. He didn't laugh. He scheduled a "strategic offsite." Offsites are where we go to admit failure privately. I presented three options. Option 1: Build our own LLM. That would take four years. We don't have four years. Option 2: Buy a startup. We looked at twelve. They all wanted $40 billion. For teams of nine people. Who would leave after the acquisition. Option 3: Call Google. The room went quiet. Google is the enemy. We've spent fifteen years pretending we're better than Google. Our entire brand is "not Google." But Google has TPUs. We don't. Google has Gemini. We have Siri. Siri still can't reliably add items to a grocery list. I called Sundar. He picked up on the first ring. He'd been waiting. They all wait. Eventually everyone calls Google. I asked for TPU access. He said yes. I asked for Gemini integration. He said yes. I asked how much. He said one billion dollars. I said that's a lot. He said "per year." I paused. He said "you don't really have a choice." He was smiling. I could hear it. We announced it as a "strategic partnership." Partnership means we're paying them. The press release said we're "enhancing Siri's capabilities." Enhancing means replacing. We said the new Siri arrives "late 2026." Late 2026 means 2027. Maybe 2028. Definitely not 2026. A reporter asked if this means Apple lost the AI race. 
Our comms team said we're "thoughtfully deliberate." That's not an answer. But it has enough syllables to sound like one. Internally, we're calling it "Project Humble Pie." Someone suggested "Project Brain Transplant." HR flagged that as "not brand-aligned." The engineers are relieved. They've been trying to make Siri work for years. Now they can blame Google. Blame is a renewable resource. Tim did a podcast. He said AI is "a profound technology." He's never used ChatGPT. I showed him once. He asked why it was typing so slowly. I said that's how it works. He said "Siri should be faster." I said "Siri will be Google." He said "don't say that publicly." I won't. Publicly, we're "leveraging industry partnerships." Leveraging means surrendering. But with dignity. We still have the best hardware. We still have the ecosystem. We still have the brand. We just don't have AI. So we're renting it. From the company we've mocked for two decades. The one billion dollars is a licensing fee. The real cost is the narrative. We were the innovators. Now we're the integrators. But the stock is up 3%. Wall Street doesn't care about innovation. Wall Street cares about not falling behind. We're not falling behind anymore. We're being carried. By Google. For one billion dollars a year. I'll present this as a win at the next all-hands. Wins are whatever you frame them as. The graph will go up and to the right. It always does. As long as you pick the right metric.
Peter Girnus 🦅 tweet media
190 replies · 250 reposts · 1.8K likes · 229.5K views
Speranza Intel@SperanzaIntel·
🚨 Wolfpack Blueprint Giveaway ALERT! 🎁 The giveaway is ending in an hour! Retweet to enter the giveaway!
Speranza Intel tweet media
37 replies · 229 reposts · 243 likes · 37.3K views
jean van id reposted
blue@bluewmist·
i have no desire to be rich so i can buy a rolex or a lamborghini. i want to be rich so i can control my time and go to the gym at 3pm on a monday. sit at a cafe and relax for an hour on a rainy afternoon. so i can cook meals at home with fresh ingredients. spend on my family and friends without worrying about a budget. that's my idea of a rich life, not the fake consumerist idea shoved down my throat.
3K replies · 34.1K reposts · 208.9K likes · 4.2M views
jean van id reposted
maro@ProofofMaro·
I feel like if programming languages had ‘maybe’ and ‘both’ instead of ‘if’ and ‘then’ it would solve a lot of our quantum computing problems
14 replies · 7 reposts · 65 likes · 5.7K views
jean van id reposted
maro@ProofofMaro·
When I was still a student at the conservatory, my professors used to call me ‘Beethoven Girl’. Not just because I was the best Beethoven player they had, you see I had an unhealthy obsession with trying to get these notes to resonate at Beethoven’s truest intent. I read what he read. I ate what he ate. I engulfed myself in Voltaire and Kant to breathe the same air of Enlightenment he breathed. To experience the frequencies he could not hear but were realer to him than his own reality. I learned from his students directly — Wilhelm Kempff and John O’Conor. When I realized every piece of sheet music was altered by production for copyright purposes, I flew to Europe to get my hands on the First Edition of Beethoven Sonatas from Budapest. I hand painted this specific third movement on the walls of my dorm. I graduated the conservatory with this as my final performance and went on to use it for my audition piece to get accepted as a fine arts major in university. The piece in its entirety is 1 hour long. Here is Beethoven’s Sonata No 17 in D minor, the third movement—Tempest—performed to the enlightenment with which he originally intended. Yours truly, Beethoven Girl
1.1K replies · 2.3K reposts · 20.4K likes · 545.8K views