Basic SWE

66 posts

@basicswe

Software engineer • in the arena • 10+ years #FAANG • Views are my own

Joined February 2022
935 Following · 618 Followers
Pinned Tweet
Basic SWE@basicswe·
I often get asked what day-to-day life at GAMMA as a Software Engineer is like. This 2007 paper from Microsoft is an unusual but fantastic source: microsoft.com/en-us/research… It is 15 years old, but little has changed compared to what's described in it. One particularly faulty process: 1/3 👇
[image]
Replies 0 · Reposts 0 · Likes 5 · Views 0
First Squawk@FirstSquawk·
ANTHROPIC SHIFTS TO USAGE-BASED BILLING, INCREASING COSTS FOR HEAVY USERS - TIF
Replies 210 · Reposts 158 · Likes 2.7K · Views 1.5M
Basic SWE@basicswe·
@KentonVarda @magnusjason That would be awesome. If you've played WoW long enough, you can probably recognize which area you're in from a single screenshot... But navigating that area would force you to be good at orienting yourself (especially in thick forests, where you can't see far-off known landmarks)
Replies 0 · Reposts 0 · Likes 0 · Views 22
Basic SWE@basicswe·
@Samward @karpathy ... Now when my wife (or, tbh, myself when I forget) is looking for an item, we just ask the agent where it is. (Or: "Where should I file this medical bill?")
Replies 0 · Reposts 0 · Likes 1 · Views 15
Basic SWE@basicswe·
@Samward @karpathy Files-over-app all the way. I might have taken it a bit too far: I made a directory tree representing items stored in the house ("./office/blue drawer/top shelf/red folder/..."). I then ran a "tree -f" at the root and dumped that output into an agent skill...
Replies 2 · Reposts 0 · Likes 1 · Views 53
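The setup described in that tweet can be sketched in a few lines. This is a minimal, hypothetical miniature of the house index: the directory names are stand-ins (only "./office/blue drawer/top shelf/red folder" comes from the tweet, which used `tree -f`), and the listing function approximates what a full-path tree dump looks like before being pasted into an agent skill.

```python
from pathlib import Path
import tempfile

# Hypothetical miniature of the house index; only the "office/blue drawer/..."
# path is from the tweet, the rest is illustrative.
root = Path(tempfile.mkdtemp()) / "house"
(root / "office" / "blue drawer" / "top shelf" / "red folder").mkdir(parents=True)
(root / "garage" / "toolbox").mkdir(parents=True)

def full_path_listing(root: Path) -> list[str]:
    """Emit every path under the index root, one per line -- roughly what
    `tree -f` prints for a directory of storage locations."""
    return sorted(str(p.relative_to(root.parent)) for p in root.rglob("*"))

listing = "\n".join(full_path_listing(root))
print(listing)  # this text is what gets dumped into the agent skill
```

Because the output is plain text, any agent (or `grep`) can answer "where is the red folder?" by scanning it.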
Andrej Karpathy@karpathy·
Farzapedia, personal wikipedia of Farza, good example following my Wiki LLM tweet. I really like this approach to personalization in a number of ways, compared to the "status quo" of an AI that allegedly gets better the more you use it or something:

1. Explicit. The memory artifact is explicit and navigable (the wiki); you can see exactly what the AI does and does not know, and you can inspect and manage this artifact even if you don't do the direct text writing (the LLM does). The knowledge of you is not implicit and unknown, it's explicit and viewable.

2. Yours. Your data is yours, on your local computer; it's not in some particular AI provider's system without the ability to extract it. You're in control of your information.

3. File over app. The memory here is a simple collection of files in universal formats (images, markdown). This means the data is interoperable: you can use a very large collection of tools/CLIs or whatever you want over this information because it's just files. The agents can apply the entire Unix toolkit over them. They can natively read and understand them. Any kind of data can be imported into files as input, and any kind of interface can be used to view them as the output. E.g. you can use Obsidian to view them or vibe code something of your own. Search "File over app" for an article on this philosophy.

4. BYOAI. You can use whatever AI you want to "plug into" this information - Claude, Codex, OpenCode, whatever. You can even think about taking an open source AI and finetuning it on your wiki - in principle, this AI could "know" you in its weights, not just attend over your data.

So this approach to personalization puts *you* in full control. The data is yours, in universal formats, explicit and inspectable. Use whatever AI you want over it; keep the AI companies on their toes! :)

Certainly this is not the simplest way to get an AI to know you - it does require you to manage file directories and so on, but agents also make it quite simple and they can help you a lot. I imagine a number of products might come out to make this all easier, but imo "agent proficiency" is a CORE SKILL of the 21st century. These are extremely powerful tools - they speak English and they do all the computer stuff for you. Take this opportunity to play with one.
Farza 🇵🇰🇺🇸@FarzaTV

This is Farzapedia. I had an LLM take 2,500 entries from my diary, Apple Notes, and some iMessage convos to create a personal Wikipedia for me. It made 400 detailed articles for my friends, my startups, research areas, and even my favorite animes and their impact on me, complete with backlinks.

But this Wiki was not built for me! I built it for my agent! The structure of the wiki files and how it's all backlinked is very easily crawlable by any agent and makes it a truly useful knowledge base. I can spin up Claude Code on the wiki, and starting at index.md (a catalog of all my articles) the agent does a really good job of drilling into the specific pages on my wiki it needs context on when I have a query.

For example, when trying to cook up a new landing page I may ask: "I'm trying to design this landing page for a new idea I have. Please look into the images and films that inspired me recently and give me ideas for new copy and aesthetics". In my diary I kept track of everything: learnings, people, inspo, interesting links, images. So the agent reads my wiki and pulls up my "Philosophy" articles from notes on a Studio Ghibli documentary, "Competitor" articles with YC companies whose landing pages I screenshotted, and pics of 1970s Beatles merch I saved years ago. And it delivers a great answer.

I built a similar system to this a year ago with RAG but it was ass. A knowledge base that lets an agent find what it needs via a file system it actually understands just works better.

The most magical thing now is that as I add new things to my wiki (articles, images of inspo, meeting notes), the system will likely update 2-3 different articles where it feels that context belongs, or just creates a new article. It's like a super genius librarian for your brain that's always filing stuff for you perfectly, lets you easily query the knowledge for tasks useful to you (e.g. design, product, writing), and never gets tired.
I might spend next week productizing this, if that's of interest to you DM me + tell me your usecase!

Replies 438 · Reposts 803 · Likes 8.8K · Views 1.3M
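What makes such a wiki "easily crawlable by any agent" is mechanical: starting at index.md, the traversal is just following relative markdown links. Here is a minimal sketch under assumed file names (index.md is from the post; philosophy.md and competitors.md are invented stand-ins for the "Philosophy" and "Competitor" articles it mentions).

```python
import re
import tempfile
from pathlib import Path

# Tiny hypothetical wiki in the spirit of the post: index.md links out to
# articles, and articles backlink each other with plain relative markdown links.
wiki = Path(tempfile.mkdtemp())
(wiki / "index.md").write_text("# Index\n- [Philosophy](philosophy.md)\n")
(wiki / "philosophy.md").write_text("Notes... see [Competitors](competitors.md)\n")
(wiki / "competitors.md").write_text("Landing pages I screenshotted.\n")

LINK = re.compile(r"\[[^\]]*\]\(([^)#]+\.md)\)")  # matches [text](page.md)

def crawl(start: str) -> list[str]:
    """Breadth-first walk following markdown links -- essentially the
    traversal an agent performs when it starts from index.md."""
    seen, queue = [], [start]
    while queue:
        name = queue.pop(0)
        if name in seen or not (wiki / name).exists():
            continue
        seen.append(name)
        queue += LINK.findall((wiki / name).read_text())
    return seen

print(crawl("index.md"))  # visits index, then the pages it links to
```

Because the whole knowledge base is plain files, this same walk works with `grep`, Obsidian, or any agent's file tools; no embedding index is required.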
Basic SWE@basicswe·
@bpodgursky You're confusing the state with more local, regional infrastructure. *This* is what the state said it would build back in 1992 (RCW 47.79.010): app.leg.wa.gov/rcw/default.as… High-speed trains: Seattle <-> Portland by 2010; Everett <-> Vancouver BC by 2025; Seattle <-> Spokane by 2030. Where is any of that?
Replies 0 · Reposts 0 · Likes 1 · Views 203
Ben Podgursky@bpodgursky·
I'm all for making fun of blue-state infrastructure failures, and you can quibble over the price, but Seattle actually built a light rail network in less-than-geologic time. Better to make fun of the NY, CA systems that spent gargantuan money and got literally nothing for it.
PNW Conservative@PNWConservative

Look at that mostly empty $3,800,000,000 light rail train! Now look at that $500,000 bus overtaking it and having more flexible options for riders. Oh well, it’s just taxpayer money. Doesn’t need to be spent responsibly.

Replies 36 · Reposts 14 · Likes 579 · Views 53K
Basic SWE@basicswe·
@a_patil @QuinnyPig @zuhayeer @GeoffreyHuntley Microsoft turned empire building into a full MMORPG. Levels go all the way to 80. (You spawn at level 59 so you're not wasting time in the starter zone going from 1 to 59. The middle management grind is where the fun is at!)
Replies 0 · Reposts 0 · Likes 1 · Views 176
Zuhayeer Musa@zuhayeer·
The quiet restructure happening at companies isn’t just layoffs, but org compression. Companies are starting to delete levels from their ladders, and increase scope overall across the board.
[image]
Replies 7 · Reposts 9 · Likes 150 · Views 20.6K
Basic SWE@basicswe·
@Percura_ai @steren Same for the lite_llm supply-chain attack: if all that exists is a snapshot of the lite_llm GitHub repo's source code at a specific version, and it's built from that source, then there is no risk through PyPI
Replies 1 · Reposts 0 · Likes 3 · Views 143
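The "build from a pinned source snapshot" argument can be made concrete: if you vendor an exact copy of the source and verify a digest of the whole tree against a reviewed, pinned value before building, the package-registry distribution channel drops out of the threat model. This is a hedged sketch; the `third_party/lite_llm` path and the snapshot contents are hypothetical stand-ins, not the actual vendoring layout.

```python
import hashlib
import tempfile
from pathlib import Path

def tree_digest(root: Path) -> str:
    """Deterministic SHA-256 over relative paths and file contents of a
    vendored source snapshot. Compare against a reviewed, pinned value
    before building; a registry-side tampered artifact can't match it."""
    h = hashlib.sha256()
    for p in sorted(root.rglob("*")):
        if p.is_file():
            h.update(str(p.relative_to(root)).encode())
            h.update(p.read_bytes())
    return h.hexdigest()

# Hypothetical vendored snapshot (stand-in for a pinned lite_llm checkout).
snap = Path(tempfile.mkdtemp()) / "third_party" / "lite_llm"
snap.mkdir(parents=True)
(snap / "setup.py").write_text("# pinned source\n")

digest = tree_digest(snap)
```

Running `tree_digest` twice over the same tree yields the same pin, which is what makes "rebuilt from that exact source" auditable.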
Steren@steren·
When I joined Google, I found it annoying that:
1. Everyone works in the same repo at head
2. All dependencies are explicitly declared
3. External dependencies are copied in a central third_party folder
4. Everything can be re-built from source
I had changed my mind on all of these points after a year.
Replies 53 · Reposts 84 · Likes 2.9K · Views 447.2K
Basic SWE@basicswe·
@karpathy You got the argument and the steelmanned counter-argument. You're missing the 3rd and last step: the LLM "debate judge". “Analyze the 2 arguments above side-by-side. Outline how strong and sound each argument is. Expose any logical fallacies seen in each argument.”
Replies 1 · Reposts 0 · Likes 0 · Views 43
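The three-step pattern (argument, steelmanned counter-argument, judge) is easy to wire up as a pipeline. In this sketch `ask_llm` is a hypothetical stub standing in for any chat-model call; only the judge prompt wording comes from the reply above.

```python
# Sketch of the three-step "debate judge" pattern.
# `ask_llm` is a hypothetical stand-in for any chat-model API call.
def ask_llm(prompt: str) -> str:
    return f"<model answer to: {prompt[:40]}...>"  # stub for illustration

def debate_judge(thesis: str) -> str:
    # Step 1: argue the thesis.
    argument = ask_llm(f"Argue for: {thesis}")
    # Step 2: steelman the opposite position.
    counter = ask_llm(f"Steelman the opposite of: {thesis}")
    # Step 3: a third call judges the two arguments side-by-side.
    verdict = ask_llm(
        "Analyze the 2 arguments below side-by-side. "
        "Outline how strong and sound each argument is. "
        "Expose any logical fallacies seen in each argument.\n\n"
        f"Argument A:\n{argument}\n\nArgument B:\n{counter}"
    )
    return verdict

print(debate_judge("remote work improves productivity"))
```

Keeping the judge as a separate call, with both arguments in one context, is what counters the sycophancy problem: the model is grading texts, not defending its own position.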
Andrej Karpathy@karpathy·
- Drafted a blog post
- Used an LLM to meticulously improve the argument over 4 hours.
- Wow, feeling great, it’s so convincing!
- Fun idea: let’s ask it to argue the opposite.
- LLM demolishes the entire argument and convinces me that the opposite is in fact true.
- lol
The LLMs may elicit an opinion when asked but are extremely competent in arguing almost any direction. This is actually super useful as a tool for forming your own opinions; just make sure to ask different directions and be careful with the sycophancy.
Replies 1.7K · Reposts 2.4K · Likes 31.2K · Views 3.4M
Basic SWE@basicswe·
@n3rdyhick Is there an easy way to export the data as one big JSON dump? I'm interested in compiling per-representative yay/nay stats on each of the common-sense amendments to see who's more reasonable. ("Delay until NBA returns", for instance, is not a serious amendment imo.)
Replies 1 · Reposts 0 · Likes 0 · Views 45
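Given such a JSON dump, the per-representative tally is a few lines. The record shape here (`amendment`, `rep`, `vote` fields) is an assumption for illustration, since no actual export format exists in the thread; only "Delay until NBA returns" is from the tweet.

```python
import json
from collections import Counter

# Hypothetical shape of the requested JSON dump; field names are assumptions.
dump = json.loads("""
[
 {"amendment": "Delay until NBA returns", "rep": "Smith", "vote": "nay"},
 {"amendment": "Delay until NBA returns", "rep": "Jones", "vote": "yay"},
 {"amendment": "Budget cap",              "rep": "Smith", "vote": "yay"}
]
""")

def tally(records):
    """Per-representative yay/nay counts across all amendments."""
    counts = {}
    for r in records:
        counts.setdefault(r["rep"], Counter())[r["vote"]] += 1
    return counts

print(tally(dump))
```

Filtering `records` to a hand-picked list of "common sense" amendments before tallying gives the reasonableness ranking the tweet describes.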
Basic SWE@basicswe·
@stevenedds @mitchellh The main use case isn't pane splitting, it's persistent tabs. You ssh into a server and open many tabs there. Then the next day you ssh into that server again, from a totally different machine if you want, and all your tabs are still there (and still running whatever command)
Replies 0 · Reposts 0 · Likes 1 · Views 129
Steven Edds@stevenedds·
@mitchellh What’s the use case for tmux in a terminal that already supports pane splitting?
Replies 2 · Reposts 0 · Likes 0 · Views 1.4K
Mitchell Hashimoto@mitchellh·
There's such a deep misunderstanding out there about tmux and I get so many absurd issue reports demonstrating that. Many don't realize that using them is like running a Windows VM on your Mac, and complaining to Apple that iCloud sync isn't working from Windows in the VM. They are super powerful and have their use and I am happy to support them in any way I can. I'm not anti-multiplexer, but I wish more people understood the architecture a bit more.
Replies 80 · Reposts 17 · Likes 1.1K · Views 177.7K
Basic SWE@basicswe·
@BradAlbert_01 @GergelyOrosz Look at some of its followers: all bot accounts with sub-20 followers themselves. They all retweet similarly generic pages like @TechLayoffLover, with ~20k followers. Gigantic bot farm.
Replies 2 · Reposts 0 · Likes 0 · Views 36
Brad Albert@BradAlbert_01·
@GergelyOrosz @TechLayoffLover How does a fake acct get 19k followers in a single month? Genuinely curious, as I'm always looking for filtering signals like this, i.e. non-US accts (mostly)
Replies 1 · Reposts 0 · Likes 0 · Views 936
Tech Layoff Tracker@TechLayoffLover·
AMAZON PRIME VIDEO BLOODBATH
2,847 employees got the email at 6:47 AM PST: "Your role has been eliminated effective immediately."
Badges dead by 7:15 AM. Slack access revoked mid-sentence.
Senior engineers who built the entire streaming infrastructure. Gone.
The team that shipped 40% faster last quarter using Claude for code generation. Eliminated.
847 contractors in Bangalore just got handed their prompt libraries and deployment scripts. Same streaming platform. Same feature velocity expected.
14 remaining Seattle engineers to "manage AI-augmented offshore delivery".
The kicker: those eliminated seniors spent 8 months documenting every architectural decision into internal wikis. Every code pattern. Every debugging workflow. Every performance optimization trick. That documentation just became training data for the AI systems replacing them.
VP of Engineering sent company-wide: "This transition represents our commitment to AI-first development."
Severance packages include mandatory 90-day non-compete clauses.
Meanwhile the Bangalore team already pushed 12 commits using the extracted knowledge base.
One former L7 told me: "I literally trained the AI that made me redundant."
If you're at FAANG and not seeing this coming, you're already dead.
DMs open for anyone who needs to talk.
Replies 1.3K · Reposts 6K · Likes 25.7K · Views 10.7M
Basic SWE@basicswe·
Wow, great summary. In addition to the traditional software-engineering trade-offs (how much to divide and conquer, space vs. time, etc.), the non-deterministic vs. deterministic knob is a new, exciting problem space to explore and solve in the agent-building domain
Aaron Levie@levie

We’re starting to get a clearer sign of how vast the surface area of context engineering is going to be.

To build AI agents, in theory, it should be as simple as having a super powerful model, giving it a set of tools, having a really good system prompt, and giving it access to data. Maybe at some point it really will be this simple.

But in practice, to make agents that work today, you’re dealing with a delicate balance of what to give to the global agent vs. a subagent. What things to make agentic vs. just a deterministic tool call. How to handle the inherent limitations of the context window. You have to figure out how to retrieve the right data for the user’s task, and how much compute to throw at the problem. How to decide what to make fast, and suffer potential quality drops, vs. slow but maybe annoying. And endless other questions.

So far there’s no one right answer for any of this, and there are meaningful tradeoffs for any given approach you take. And importantly, getting this right requires a deep understanding of the domain you’re solving the problem for. Handling this problem in AI coding is different from law, which is different from healthcare. This is why there’s so much opportunity for AI agent plays right now.

Replies 0 · Reposts 0 · Likes 0 · Views 39
Basic SWE reposted
Louie Bacaj@LBacaj·
It’s clear now that the reason employers treated tech workers so well wasn’t out of the goodness of their hearts, but because tech talent had options. It’s impressive to see how fast so many employers are chucking any goodwill out the window these days.
Replies 41 · Reposts 53 · Likes 908 · Views 52.3K
Basic SWE reposted
tobi lutke@tobi·
Sunday rant. For software engineering, my sense is that the phrase “premature optimization is the root of all evil” has massively backfired. It’s from a book on data structures and mainly tried to dissuade people from prematurely writing things in assembler. But the point was to free you up to think harder about the data structures to use, not to leave things comically inefficient. This context is always skipped when it’s uttered. Not all fast software is world-class, but all world-class software is fast. Performance is _the_ killer feature.

If you are in engineering, here is a fantastic anecdote. I refer to this account often. It’s a bit subtle, but the implications are massive: it’s an account of how SQLite became 50% faster, not by doing one specific thing but hundreds of small ones. SQLite is everywhere today because of this work. sqlite-users.sqlite.narkive.com/CVRvSKBs/50-fa…

We need the engineers in all companies to fight for this more. Product leads are not the right owners of the end performance of the software. This needs to be encoded in the professional pride of the software engineering discipline. Leaders in companies need to encourage it and hold engineering accountable. It’s simply not ok to fritter away the performance of the products for random reasons. Every user of your products cares exactly as much about latency as engineers do when typing in their terminal. They just don’t have the words to describe what they don’t like about the experience, and neither should they.
Replies 211 · Reposts 823 · Likes 5.5K · Views 1.2M