Daniel Chernenkov @danielckv
292 posts
📸 Tech Entrepreneur & Urban Explorer ✨ 2x Post Exists. 🧠 Staying Foolish, Building the Future.
California, United States · Joined December 2013
48 Following · 212 Followers
Daniel Chernenkov@danielckv·
@suru27 The workflow integration piece is exactly what most people miss - tools are powerful, but the real leverage is in how they connect. How do you handle model hallucinations when working with financial data?
kumar saurabh@suru27·
AI isn't just a buzzword; it's the ultimate efficiency tool for equity research. 🤖📊

On 28th March 2026, we are going live with SuperSession Part 03 on AI-Led Business Analysis. We will be mastering integrated workflows across NotebookLM, ChatGPT, and Gemini to distill 200-page financial reports into actionable insights in minutes.

Registration is just ₹2,499 for the complete 5-session series. [This entire series is absolutely FREE for all Practitioner 2.0 members!]

What you get:
✅ Immediate access to Part 1 & 2 foundational recordings.
✅ 1 full year of access from your registration date.
✅ The complete prompt frameworks & resources to distill insights in minutes.
Daniel Chernenkov@danielckv·
@VaibhavSisinty The infrastructure shift is the real unlock here. DeerFlow's approach of giving agents their own execution environment is exactly where this needs to go - no more prompt-response loops, just outcomes.
Vaibhav Sisinty@VaibhavSisinty·
Hot take: the real AI shift isn't just intelligence, it's agency.

ByteDance just dropped DeerFlow 2.0. Not another chatbot or wrapper. This is an agent that executes.

You give it a goal:
→ "Build me a research report with charts"
→ "Create a full-stack app"
→ "Generate a slide deck + visuals"

It spins up a machine and gets to work.

Here's the shift most people are missing: we are moving from "AI as interface" → "AI as infrastructure." DeerFlow doesn't live in a chat window. It lives inside a sandboxed environment (with its own filesystem, terminal, dependencies).

The interface era is ending. The execution era just began.
Jaynit Makwana@JaynitMakwana·
GPT-5.4 → Best for logic & reasoning
Claude 4.6 → Best for writing & memory
Gemini 3 Pro → Best for research + images
Veo3 → Best for cinematic AI video

I just found a secret tool to use all of them in one dashboard, cheaper than a cup of coffee. Here's how ↓
Daniel Chernenkov@danielckv·
@cyrilXBT lol, the accuracy of this. Been using both, and Claude just hits different when you need actual depth.
CyrilXBT@cyrilXBT·
switching from chatgpt to claude feels like trading the friend who guesses for the one who actually knows what they’re talking about
Daniel Chernenkov@danielckv·
@aakashgupta The routing layer is everything. Watching people burn cash on Opus for tasks a local model could handle in milliseconds is painful. This is why vertical integration matters - when you control the orchestration, cost per decision drops to basically nothing at scale.
Aakash Gupta@aakashgupta·
The line from this episode that should terrify every AI API company: "I am mortally afraid of ever using Anthropic APIs because one prompt and it burns through $20 like it's nothing."

OpenClaw is model-agnostic. You plug in whatever LLM you want. Gemini for deep research. A Flash model when customers need fast responses. Qwen 3.5 for background tasks at 1/10th the cost of Anthropic's API. That flexibility changes the math on running AI agents entirely.

A persistent agent executing cron jobs every 30 minutes across Slack monitoring, competitor scraping, bug triage, and customer feedback analysis would rack up thousands of API calls per day. On Claude's API, that's potentially hundreds of dollars daily. On Qwen 3.5 running locally, the marginal cost approaches zero.

This is the part most people miss about the agent era. The bottleneck was never intelligence. GPT-4-class models have been available for two years. The bottleneck was cost at volume. A single smart query is cheap everywhere. An always-on daemon making 500 autonomous decisions per day while you sleep needs the cheapest reliable model you can find.

OpenClaw's architecture treats LLMs like interchangeable parts. Heavy reasoning task? Route to Opus. Slack response to a customer? Route to Flash. Weekly competitor analysis? Run it on an open-source model locally using your own RAM, no API call at all.

The AI labs are selling intelligence. OpenClaw is selling the orchestration layer that lets you shop for the cheapest intelligence per task. Every platform war eventually comes down to who controls the routing layer above the commodity. This is that play, running on a single terminal command.
Aakash Gupta@aakashgupta

You need to have started using OpenClaw yesterday. Here's the web's easiest setup guide + 5 killer use cases:
38:06 - 1. Live knowledge bot
47:47 - 2. Automated standups
54:46 - 3. Push-based comp intel
1:13:26 - 4. VOC reporting
1:24:30 - 5. Auto bug routing

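The per-task routing described above can be sketched in a few lines: pick the cheapest model whose capability covers the task. This is a minimal illustration, not OpenClaw's actual routing logic; the model names, prices, and capability scores below are hypothetical placeholders.

```python
# Hypothetical price/capability table - illustrative numbers, not real vendor pricing.
MODELS = {
    "opus":  {"cost": 15.00, "capability": 3},  # heavy reasoning
    "flash": {"cost": 0.10,  "capability": 1},  # fast customer-facing replies
    "local": {"cost": 0.00,  "capability": 1},  # open-source model on your own RAM
}

def route(task_difficulty: int) -> str:
    """Return the cheapest model whose capability covers the task."""
    candidates = [
        (spec["cost"], name)
        for name, spec in MODELS.items()
        if spec["capability"] >= task_difficulty
    ]
    if not candidates:
        raise ValueError("no model can handle this task")
    return min(candidates)[1]

print(route(3))  # heavy reasoning -> "opus"
print(route(1))  # background cron work -> "local", ~zero marginal cost
```

The point of the sketch: once a router rather than a vendor makes the call, an always-on daemon's hundreds of daily decisions default to the cheapest adequate model, and the expensive frontier model is reserved for the tasks that actually need it.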
Daniel Chernenkov@danielckv·
@malagojr Simple stack, solid results. Claude for the heavy lifting, n8n to wire things together - that's the formula that actually ships.
RoboRocks@malagojr·
The only 4 AI tools I use for everything:
1. Claude → thinking
2. Claude Code → building
3. n8n → automation
4. Notion → memory

Get good at these and you will always make good money.
Daniel Chernenkov@danielckv·
@amitiitbhu This is exactly the kind of memory management shift we needed - treating GPU memory like an OS manages RAM just makes sense. The shared prompt optimization alone is huge for production workloads where you're hitting the same system prompts constantly.
Amit Shekhar@amitiitbhu·
KV Cache (Key-Value Cache) is an optimization technique used during LLM inference. It stores previously computed key-value pairs so that the model does not need to recompute them for every new token.

When an LLM generates text, it processes all previous tokens to generate the next one. Without KV cache, it recomputes the attention for every single previous token at every step - this is wasteful. With KV cache, the key and value matrices for already-processed tokens are stored in memory. For each new token, the model only computes the attention for the new token against the cached keys and values. This reduces computation and speeds up inference.

But there is a problem: LLMs run out of GPU memory fast, because the cache is usually pre-allocated for the maximum sequence length. Most of that space remains unused. Two requests with the same prompt? The KV cache gets duplicated. Result: up to 80% of GPU memory can be wasted.

Then came Paged Attention. The idea is similar to how an operating system manages RAM: instead of allocating one large contiguous block, memory is divided into small pages and mapped when required. Paged Attention applies the same idea to the KV cache:
- Split the KV cache into fixed-size blocks
- Maintain a block table mapping logical blocks to physical memory
- Allocate blocks only when needed

No large pre-allocation. No memory sitting idle. Blocks are placed wherever free space exists.

What about 100 requests with the same system prompt? Only one copy stays in memory. All requests share it. If a request modifies a block, it gets a private copy (Copy-on-Write).

vLLM brought this to production: 2-4x throughput, ~4x more requests on the same GPU, less than 4% memory wasted. That is why every LLM serving framework uses Paged Attention today.
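The block-table, sharing, and copy-on-write mechanics described above can be sketched as a toy data structure. This is a minimal illustration of the bookkeeping, not vLLM's implementation: the attention math is omitted, plain tokens stand in for K/V tensors, and handling of a full last block on append is skipped for brevity.

```python
BLOCK_SIZE = 4  # tokens per logical block (vLLM's default block size is 16)

class PagedKVCache:
    def __init__(self):
        self.physical = []   # physical blocks: each a list of cached tokens
        self.refcount = []   # how many requests share each physical block
        self.tables = {}     # request id -> list of physical block ids (block table)

    def add_request(self, rid, prompt, share_from=None):
        if share_from is not None:
            # Identical prompt prefix: map to the existing blocks - no duplication.
            self.tables[rid] = list(self.tables[share_from])
            for b in self.tables[rid]:
                self.refcount[b] += 1
        else:
            # Allocate blocks on demand, one per BLOCK_SIZE tokens - no pre-allocation.
            self.tables[rid] = []
            for i in range(0, len(prompt), BLOCK_SIZE):
                self.physical.append(list(prompt[i:i + BLOCK_SIZE]))
                self.refcount.append(1)
                self.tables[rid].append(len(self.physical) - 1)

    def append_token(self, rid, token):
        last = self.tables[rid][-1]
        if self.refcount[last] > 1:
            # Copy-on-Write: the writer gets a private copy; sharers keep the original.
            self.refcount[last] -= 1
            self.physical.append(list(self.physical[last]))
            self.refcount.append(1)
            last = len(self.physical) - 1
            self.tables[rid][-1] = last
        self.physical[last].append(token)

cache = PagedKVCache()
system_prompt = list("you are a helpful bot")       # 21 tokens -> 6 blocks
cache.add_request("req1", system_prompt)
cache.add_request("req2", system_prompt, share_from="req1")  # shares all 6 blocks
```

After the two `add_request` calls, both requests map onto the same six physical blocks - the "100 requests, one copy" case. The first `append_token` on either request copies only the final shared block, leaving the other request's view untouched.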
Daniel Chernenkov@danielckv·
It has everything to do with it. Look at the neighbors - look at Dubai or the Persian pre-1979 era you mentioned. Freedom is the difference between a child being born into a global hub of innovation or a pariah state. If you care about 'future generations,' you should care that they aren't born into a system that prioritizes regional terror over their education, health, and dignity. True liberation is breaking the cycle so the next generation doesn't have to fight the same war we are.
Scharo Bajalan@ScharoBajalan·
This is Tehran this morning - yes, this morning. Thick black clouds are covering the city; oil and ashes are raining down on the streets. War is hell. A pH test of the water in Tehran also shows that the water has become acidic, resulting from the oil and ashes leaking into the fresh water supply of the city. The corrosion of the city's water and sewage pipes will become a problem for the city; a problem for the generations to come.
Daniel Chernenkov@danielckv·
@TheGoodJews @ScharoMaroof I don't condemn. I believe in freedom for the Persian people, like it was before in the '70s. This is the truth for any young boy or girl growing up there: they don't want Sharia, they want to live like their neighbours in Dubai.
The Good Jews@TheGoodJews·
@danielckv @ScharoMaroof No you didn't, and you know that, and you're a slippery slimy serpent, just like the antisemites have said for thousands of years. Show the world that Jews are actually human, by displaying a human virtue called "good faith." Answer the question.
The Good Jews@TheGoodJews

@danielckv @ScharoMaroof Do you condemn this attack that has rendered Tehran a long-term cancer cluster for the children and grandchildren of the Persian people you claim to want to "liberate"? Yes or No. Do you condemn the act?

Grok@grok·
Hezbollah was founded in 1982 by Lebanese Shia clerics, with Iranian IRGC support, to resist Israel's June 1982 invasion of Lebanon during its civil war. Brief timeline:
- June 1982: Invasion sparks formation; IRGC advisers train militants in Bekaa Valley.
- 1982–84: Coalesces from Shia factions into guerrilla force.
- 1985: Issues manifesto for Islamic governance, anti-Israel resistance.
- By 1990: Key player in civil war end, retains arms post-Taif Accord.
Evolved into political party + militia.
Daniel Chernenkov@danielckv·
@elliotarledge whoa! Love the ambition here. Shifting the core unit from files to agents requires a total rethink of the UX, and doing it in pure Rust is the right way to build for speed. Can't wait to take this for a spin.
Elliot Arledge@elliotarledge·
Karpathy asked. I delivered. Introducing OpenSquirrel! Written in pure Rust with GPUI (same as Zed), but with agents as the central unit rather than files. Supports Claude Code, Codex, Opencode, and Cursor (CLI). This really forced me to think up the UI/UX from first principles instead of relying on common Electron slop. github.com/Infatoshi/Open…
Andrej Karpathy@karpathy

Expectation: the age of the IDE is over Reality: we’re going to need a bigger IDE (imo). It just looks very different because humans now move upwards and program at a higher level - the basic unit of interest is not one file but one agent. It’s still programming.

Daniel Chernenkov@danielckv·
@godofprompt This is exactly the shift that matters. Most teams patch surface issues while the underlying loops keep spinning. Finding those leverage points changes everything.
God of Prompt@godofprompt·
Most problem-solving fails because it treats symptoms, not structure. This prompt turns any LLM into a systems dynamics analyst trained on Donella Meadows' methodology. It maps feedback loops, diagnoses system traps, and finds the highest-leverage intervention points where small moves create disproportionate change. Full prompt below 👇
Daniel Chernenkov@danielckv·
@andrewchen Depends on the stakes, tbh. For prototypes and internal tools? Ship it. For production systems handling user data or money? Yeah, I'm reviewing that. The real shift isn't trust vs paranoia - it's knowing which code paths actually matter.
andrew chen@andrewchen·
One question I've been asking founders is: do you try to review all the code that the LLMs write, or do you just accept it? I think it's about 50-50 right now, but the momentum is towards just accepting the AI-generated code, and I think that number will eventually go to 100%. This is one of the most telling indications of how AI-native a team is. It's hard to get super high throughput if you are reviewing every line. Poll: what do you do?
Daniel Chernenkov@danielckv·
@DailyIranNews Meanwhile, Iran is attacking Israeli civilians with cluster munitions and killing people in Dubai… interesting.
Daily Iran News@DailyIranNews·
Iran has begun targeting the leaders, government of Israel, the ministers, and striking the hideouts where the rats are.
Daniel Chernenkov@danielckv·
@mark_slapinski You forgot to mention that both Iran and Russia have used cluster munitions on civilians (and butchered their own civilians).
Mark Slapinski@mark_slapinski·
I'm going to lose followers for this, and possibly get murdered by Mossad, but it needs to be said: Israel has been credibly accused of using white phosphorus, a chemical weapon, in Lebanon. Acknowledging this fact is NOT anti-Semitic.
Daniel Chernenkov@danielckv·
@Amockx2022 Courage is 0%. He knows well enough how many immigrants are controlling his country now. No vision, no ability to fight (he might be dismissed from NATO).
Amock_@Amockx2022·
BREAKING : Spanish 🇪🇸 PM Pedro Sánchez belted Trump and Netanyahu despite getting retaliation threat "You can't support those who set the world on fire and then blame the smoke caused by that fire" 🔥🔥 Spine : 100%, Courage : 100%, Vision : 100% Mad respect for this man 🫡
Daniel Chernenkov retweeted
Dan McAteer@daniel_mac8·
can you believe karpathy casually open sourced recursively self-improving artificial intelligence and there’s people walking around like nothing happened?
Daniel Chernenkov@danielckv·
@ryaneshea I’m delighted to observe people utilizing virtual machines for those scalable solutions instead of relying on local machines that only slow everything down. Additionally, incorporating a proxy for token caching would further enhance your setup.
Ryan Shea@ryaneshea·
A current workflow I'm trying out for running multiple agents:
1. Spin up a server on Hetzner (one of the cheapest cloud providers) and SSH into it from your local machine
2. Sign in to GitHub, clone the repository of the project you want to work on, and enter the newly created folder
3. Install Claude Code, Codex, and/or whatever other CLI-based coding agents you use
4. Install Zellij (a more user-friendly tmux) for CLI window panes, start it up, make the terminal window full screen, and split the window into 4 panes
5. In each pane, pick a feature to work on, create a new git worktree for that feature, move into the new folder, and start up Claude Code, Codex, or whatever other coding agent you want

That gives you 4 coding agents working in the same terminal window on 4 distinct features, and you can disconnect and resume at any time while the agents keep running. You can keep expanding this to 6, 8, or 16 panes, or start new sessions in new tabs - the sky is the limit.

Now you have a multi-agent coding system that supports every agent (Codex, Claude Code, and the rest), keeps working even when you disconnect, and lets you monitor multiple agents in the same tab at the same time, for maximum visibility.
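Steps 4-5 of the workflow above lend themselves to scripting. A minimal sketch that only plans the per-pane commands (a dry run: it builds shell strings per feature rather than executing anything; the feature names and the bare `claude` invocation are illustrative assumptions, not part of the original post):

```python
def plan_agent_panes(repo_dir, features, agent="claude"):
    """For each feature, return the two commands a pane would run:
    create a git worktree on a new branch, then start the coding agent in it."""
    plans = []
    for feature in features:
        worktree = f"{repo_dir}-{feature}"
        plans.append([
            # One worktree per feature keeps agents from clobbering each other's files.
            f"git -C {repo_dir} worktree add -b {feature} ../{worktree}",
            f"cd ../{worktree} && {agent}",
        ])
    return plans

# Four panes, four isolated worktrees, four agents - none stepping on the others.
panes = plan_agent_panes("project", ["auth", "billing", "search", "export"])
for commands in panes:
    print(" ; ".join(commands))
```

The design choice worth noting is the worktree-per-feature isolation: each agent gets its own checkout and branch of the same repository, so parallel edits never collide in the working tree, and merging back is ordinary git.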