luminousmen
@luminousmen
736 posts

Senior Data Engineer @Spotify | Blogging at https://t.co/ZrcSE7WJOu about Data Engineering, Python, ML | Author of Grokking Concurrency

Joined November 2017
91 Following · 525 Followers

Pinned Tweet
luminousmen @luminousmen ·
Hello wonderful people! I’m thrilled to announce that my new book, “Grokking Concurrency,” has officially hit the shelves! You can find it here: manning.com/books/grokking… I could really use your support in spreading the word!
[image]
luminousmen @luminousmen ·
Spark is powerful. It scales. It's fast out of the box. And yeah, the defaults are surprisingly decent - until your dataset grows, your joins get messy, or you start mixing Scala with PySpark and Arrow and some eager ML engineer starts throwing 200MB Pandas UDFs at the cluster.
luminousmen @luminousmen ·
Sounds about right
[image]
luminousmen @luminousmen ·
ONLYFANS could be the most revenue-efficient company on the planet, beating Nvidia, Meta, Tesla, and Amazon - powered by ass, not AI.
[image]
luminousmen @luminousmen ·
If you're experimenting with Antigravity or any similar agent-driven development tools, keep the following in mind:
- Lock down access to secrets
- Audit what capabilities your agents actually have
- Treat AI agents like remote developers
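The first point above, locking down access to secrets, can start with something as simple as not inheriting the parent environment when spawning an agent process. A minimal Python sketch of that idea (the allowlist and variable names here are illustrative assumptions, not anything Antigravity-specific):

```python
# Build a minimal environment for an agent subprocess: start from an
# explicit allowlist instead of inheriting everything, so API keys and
# tokens in the parent environment never reach the agent.
import os

ALLOWED_VARS = {"PATH", "HOME", "LANG"}  # illustrative allowlist, tune per agent

def scrubbed_env(environ=None):
    """Return only the allowlisted variables from the given environment."""
    environ = os.environ if environ is None else environ
    return {k: v for k, v in environ.items() if k in ALLOWED_VARS}

# Example: a secret present in the parent env is dropped.
parent = {"PATH": "/usr/bin", "HOME": "/home/me", "AWS_SECRET_ACCESS_KEY": "hunter2"}
env = scrubbed_env(parent)
print(sorted(env))  # ['HOME', 'PATH']
```

You would then launch the agent with something like `subprocess.run(cmd, env=scrubbed_env())`, which is also how you would treat a remote developer: give them exactly the credentials the task needs and nothing else.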
luminousmen @luminousmen ·
Security researchers at PromptArmor have discovered a critical vulnerability in Google Antigravity - Google's new AI-powered IDE that uses Gemini-based agents.
luminousmen @luminousmen ·
So when you cluster/partition well, you're not "making queries faster". You're giving the engine permission to do nothing on most of your data.
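The "permission to do nothing" above is partition pruning: when the filter predicate matches the partitioning key, the engine skips whole partitions without reading a single row. A toy Python sketch of the mechanism (this is not BigQuery itself; the table layout and column names are made up for illustration):

```python
# Toy illustration of partition pruning: rows are stored bucketed by a
# date partition key, and a filtered "query" only opens partitions whose
# key matches the predicate -- all other partitions are never read.
from collections import defaultdict

rows = [
    {"event_date": "2024-01-01", "user": "a"},
    {"event_date": "2024-01-01", "user": "b"},
    {"event_date": "2024-01-02", "user": "c"},
    {"event_date": "2024-01-03", "user": "d"},
]

# Hypothetical table partitioned by event_date.
partitions = defaultdict(list)
for row in rows:
    partitions[row["event_date"]].append(row)

def query(partitions, wanted_date):
    """Scan only partitions whose key satisfies the predicate."""
    scanned = 0
    result = []
    for key, part in partitions.items():
        if key != wanted_date:  # pruned: these rows are never touched
            continue
        scanned += len(part)
        result.extend(part)
    return result, scanned

result, scanned = query(partitions, "2024-01-01")
print(len(result), scanned)  # 2 rows returned, only 2 of 4 rows scanned
```

An unpartitioned table would have to scan all 4 rows to answer the same query; at warehouse scale that gap is the difference between touching gigabytes and touching a whole petabyte.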
luminousmen @luminousmen ·
Most people treat BigQuery like a magic SQL endpoint. You write a query, hit Run, wait a few seconds... and a petabyte-sized answer pops out. If it's slow or expensive, the default reaction is: "I need more compute". That's backwards. #dataengineering
luminousmen @luminousmen ·
So an absolute "never" feels more like a stance than an argument. Yes - in their current form, LLMs aren't intelligent. But shutting down the conversation with a definitive "never" feels shortsighted at best. What do you guys think? 🔗 Link: futurism.com/artificial-int…
luminousmen @luminousmen ·
The real question is whether that counts as intelligence, or just a very sophisticated simulation of it. From a technical perspective, the architecture can evolve: LLM + world model + memory + planning could realistically form the foundation for something more genuinely intelligent.
luminousmen @luminousmen ·
The article claims that LLMs will never become "intelligent".