vertigo

454 posts


@StuckInsideACan

programmer, former $sc ⛏️, hacker and sceptic. more knowledge, less understanding 🐳

Earth · Joined March 2017
568 Following · 324 Followers
vertigo
vertigo@StuckInsideACan·
@mrbeltandwezol Ever considered an SLD getting out, enough classic disco vibes?
0
0
0
2
vertigo
vertigo@StuckInsideACan·
@toddsaunders The prompt is way too loose, realistically it should probably be around 800 grand.
vertigo tweet media
0
0
1
354
Todd Saunders
Todd Saunders@toddsaunders·
Got flooded with DMs asking for the markdown file. Here it is, bookmark this.

---
name: cost-estimate
description: Estimate development cost of a codebase based on lines of code and complexity
---

# Cost Estimate Command

You are a senior software engineering consultant tasked with estimating the development cost of the current codebase.

## Step 1: Analyze the Codebase

Read the entire codebase to understand:

- Total lines of code (Swift, C++, Metal shaders)
- Architectural complexity (frameworks, integrations, APIs)
- Advanced features (Metal rendering, CoreMediaIO, AVFoundation)
- Testing coverage
- Documentation quality

Use the Glob and Read tools to systematically review:

- All Swift source files in Sources/
- All C++ files in DALPlugin/
- All Metal shader files
- All test files in Tests/
- Build scripts and configuration files

## Step 2: Calculate Development Hours

Based on industry standards for a **senior full-stack developer** (5+ years experience):

**Hourly Productivity Estimates**:

- Simple CRUD/UI code: 30-50 lines/hour
- Complex business logic: 20-30 lines/hour
- GPU/Metal programming: 10-20 lines/hour
- Native C++ interop: 10-20 lines/hour
- Video/audio processing: 10-15 lines/hour
- System extensions/plugins: 8-12 lines/hour
- Comprehensive tests: 25-40 lines/hour

**Additional Time Factors**:

- Architecture & design: +15-20% of coding time
- Debugging & troubleshooting: +25-30% of coding time
- Code review & refactoring: +10-15% of coding time
- Documentation: +10-15% of coding time
- Integration & testing: +20-25% of coding time
- Learning curve (new frameworks): +10-20% for specialized tech

**Calculate total hours** considering:

1. Base coding hours (lines of code / productivity rate)
2. Multipliers for complexity and overhead
3. Phases completed vs. remaining
4. Specialized knowledge required (CoreMediaIO, Metal, etc.)
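The Step 2 arithmetic can be sketched in a few lines of Python. The per-category LOC figures below are made up for illustration; the rates and overhead factors are midpoints of the ranges in the tables above:

```python
# Hypothetical LOC per category; productivity rates are midpoints of the
# Step 2 table (lines/hour).
loc  = {"ui": 4000, "logic": 6000, "metal": 1500, "cpp": 1200, "tests": 3000}
rate = {"ui": 40,   "logic": 25,   "metal": 15,   "cpp": 15,   "tests": 32}

base_hours = sum(loc[k] / rate[k] for k in loc)  # raw coding time

# "Additional Time Factors", midpoints: architecture 18%, debugging 28%,
# review 12%, docs 12%, integration 22%, learning curve 15%.
overhead = 0.18 + 0.28 + 0.12 + 0.12 + 0.22 + 0.15
total_hours = base_hours * (1 + overhead)

print(round(base_hours), round(total_hours))  # 614 1270
```

Note how the overhead factors more than double the raw coding time, which is why pure LOC-based estimates tend to run low.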
## Step 3: Research Market Rates

Use WebSearch to find current 2025 hourly rates for:

- Senior full-stack developers (5-10 years experience)
- Specialized iOS/macOS developers
- Contractors vs. employees
- Geographic variations (US markets: SF Bay Area, NYC, Austin, Remote)

Search queries to use:

- "senior full stack developer hourly rate 2025"
- "macOS Swift developer contractor rate 2025"
- "senior software engineer hourly rate United States 2025"
- "iOS developer freelance rate 2025"

## Step 4: Calculate Organizational Overhead

Real companies don't have developers coding 40 hours/week. Account for typical organizational overhead to convert raw development hours into realistic calendar time.

**Weekly Time Allocation for a Typical Company**:

| Activity | Hours/Week | Notes |
|----------|------------|-------|
| **Pure coding time** | 20-25 hrs | Actual focused development |
| Daily standups | 1.25 hrs | 15 min × 5 days |
| Weekly team sync | 1-2 hrs | All-hands, team meetings |
| 1:1s with manager | 0.5-1 hr | Weekly or biweekly |
| Sprint planning/retro | 1-2 hrs | Per-week average |
| Code reviews (giving) | 2-3 hrs | Reviewing teammates' work |
| Slack/email/async | 3-5 hrs | Communication overhead |
| Context switching | 2-4 hrs | Interruptions, task switching |
| Ad-hoc meetings | 1-2 hrs | Unplanned discussions |
| Admin/HR/tooling | 1-2 hrs | Timesheets, tools, access requests |

**Coding Efficiency Factor**:

- **Startup (lean)**: 60-70% coding time (~24-28 hrs/week)
- **Growth company**: 50-60% coding time (~20-24 hrs/week)
- **Enterprise**: 40-50% coding time (~16-20 hrs/week)
- **Large bureaucracy**: 30-40% coding time (~12-16 hrs/week)

**Calendar Weeks Calculation**:

```
Calendar Weeks = Raw Dev Hours ÷ (40 × Efficiency Factor)
```

Example: 3,288 raw dev hours at 50% efficiency = 3,288 ÷ 20 = **164.4 weeks** (~3.2 years)

## Step 5: Calculate Full Team Cost

Engineering doesn't ship products alone. Calculate the fully loaded team cost including all supporting roles.

**Supporting Role Ratios** (expressed as ratio to engineering hours):

| Role | Ratio to Eng Hours | Typical Rate | Notes |
|------|-------------------|--------------|-------|
| Product Management | 0.25-0.40× | $125-200/hr | PRDs, roadmap, stakeholder mgmt |
| UX/UI Design | 0.20-0.35× | $100-175/hr | Wireframes, mockups, design systems |
| Engineering Management | 0.12-0.20× | $150-225/hr | 1:1s, hiring, performance, strategy |
| QA/Testing | 0.15-0.25× | $75-125/hr | Test plans, manual testing, automation |
| Project/Program Management | 0.08-0.15× | $100-150/hr | Schedules, dependencies, status |
| Technical Writing | 0.05-0.10× | $75-125/hr | User docs, API docs, internal docs |
| DevOps/Platform | 0.10-0.20× | $125-200/hr | CI/CD, infra, deployments |

**Team Composition by Company Stage**:

| Stage | PM | Design | EM | QA | PgM | Docs | DevOps |
|-------|-----|--------|-----|-----|------|------|--------|
| Solo/Founder | 0% | 0% | 0% | 0% | 0% | 0% | 0% |
| Lean Startup | 15% | 15% | 5% | 5% | 0% | 0% | 5% |
| Growth Company | 30% | 25% | 15% | 20% | 10% | 5% | 15% |
| Enterprise | 40% | 35% | 20% | 25% | 15% | 10% | 20% |

**Full Team Multiplier**:

- **Solo/Founder**: 1.0× (just engineering)
- **Lean Startup**: ~1.45× engineering cost
- **Growth Company**: ~2.2× engineering cost
- **Enterprise**: ~2.65× engineering cost

**Calculation**:

```
Full Team Cost = Engineering Cost × Team Multiplier
```

Example: $500K engineering cost at a Growth Company = $500K × 2.2 = **$1.1M total team cost**

## Step 6: Generate Cost Estimate

Provide a comprehensive estimate in this format:

---

## KeyMe MVP - Development Cost Estimate

**Analysis Date**: [Current Date]
**Codebase Version**: [From CLAUDE.md phase status]

### Codebase Metrics

- **Total Lines of Code**: [number]
  - Swift: [number] lines
  - C++: [number] lines
  - Metal Shaders: [number] lines
  - Tests: [number] lines
  - Documentation: [number] lines
- **Complexity Factors**:
  - Advanced frameworks: [list key ones]
  - System-level programming: [Camera Extensions, DAL Plugins, etc.]
  - GPU programming: [Metal shaders, rendering]
  - Third-party integrations: [OpenAI, etc.]

### Development Time Estimate

**Base Development Hours**: [number] hours

- Phase 1 (Foundation): [hours] hours
- Phase 2 (Virtual Camera): [hours] hours
- Phase 3 (Audio/Transcription): [hours] hours
- Remaining phases: [hours] hours

**Overhead Multipliers**:

- Architecture & Design: +[X]% ([hours] hours)
- Debugging & Troubleshooting: +[X]% ([hours] hours)
- Code Review & Refactoring: +[X]% ([hours] hours)
- Documentation: +[X]% ([hours] hours)
- Integration & Testing: +[X]% ([hours] hours)
- Learning Curve: +[X]% ([hours] hours)

**Total Estimated Hours**: [number] hours

### Realistic Calendar Time (with Organizational Overhead)

| Company Type | Efficiency | Coding Hrs/Week | Calendar Weeks | Calendar Time |
|--------------|------------|-----------------|----------------|---------------|
| Solo/Startup (lean) | 65% | 26 hrs | [X] weeks | ~[X] months |
| Growth Company | 55% | 22 hrs | [X] weeks | ~[X] years |
| Enterprise | 45% | 18 hrs | [X] weeks | ~[X] years |
| Large Bureaucracy | 35% | 14 hrs | [X] weeks | ~[X] years |

**Overhead Assumptions**:

- Standups, team syncs, 1:1s, sprint ceremonies
- Code reviews (giving), Slack/email, ad-hoc meetings
- Context switching, admin/tooling overhead

### Market Rate Research

**Senior Full-Stack Developer Rates (2025)**:

- Low end: $[X]/hour (remote, mid-level market)
- Average: $[X]/hour (standard US market)
- High end: $[X]/hour (SF Bay Area, NYC, specialized)

**Recommended Rate for This Project**: $[X]/hour

*Rationale*: This project requires specialized macOS development skills (CoreMediaIO, Metal, system extensions) which command premium rates.
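The calendar-time conversion in Step 4 is a one-liner; a minimal sketch, using the worked example from that step:

```python
def calendar_weeks(raw_dev_hours: float, efficiency: float) -> float:
    """Calendar weeks = raw dev hours / (40 h/week * coding-efficiency factor)."""
    return raw_dev_hours / (40 * efficiency)

# Step 4's example: 3,288 raw dev hours at 50% efficiency.
weeks = calendar_weeks(3288, 0.50)
print(weeks)          # 164.4 weeks
print(weeks / 52)     # ~3.2 years
```

Filling the calendar-time table above is just this function evaluated at each efficiency factor (0.65, 0.55, 0.45, 0.35).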
### Total Cost Estimate

| Scenario | Hourly Rate | Total Hours | **Total Cost** |
|----------|-------------|-------------|----------------|
| Low-end | $[X] | [hours] | **$[X,XXX]** |
| Average | $[X] | [hours] | **$[X,XXX]** |
| High-end | $[X] | [hours] | **$[X,XXX]** |

**Recommended Estimate (Engineering Only)**: **$[X,XXX] - $[X,XXX]**

### Full Team Cost (All Roles)

| Company Stage | Team Multiplier | Engineering Cost | **Full Team Cost** |
|---------------|-----------------|------------------|-------------------|
| Solo/Founder | 1.0× | $[X] | **$[X]** |
| Lean Startup | 1.45× | $[X] | **$[X]** |
| Growth Company | 2.2× | $[X] | **$[X]** |
| Enterprise | 2.65× | $[X] | **$[X]** |

**Role Breakdown (Growth Company Example)**:

| Role | Hours | Rate | Cost |
|------|-------|------|------|
| Engineering | [X] hrs | $[X]/hr | $[X] |
| Product Management | [X] hrs | $[X]/hr | $[X] |
| UX/UI Design | [X] hrs | $[X]/hr | $[X] |
| Engineering Management | [X] hrs | $[X]/hr | $[X] |
| QA/Testing | [X] hrs | $[X]/hr | $[X] |
| Project Management | [X] hrs | $[X]/hr | $[X] |
| Technical Writing | [X] hrs | $[X]/hr | $[X] |
| DevOps/Platform | [X] hrs | $[X]/hr | $[X] |
| **TOTAL** | **[X] hrs** | | **$[X]** |

### Grand Total Summary

| Metric | Solo | Lean Startup | Growth Co | Enterprise |
|--------|------|--------------|-----------|------------|
| Calendar Time | [X] | [X] | [X] | [X] |
| Total Human Hours | [X] | [X] | [X] | [X] |
| **Total Cost** | **$[X]** | **$[X]** | **$[X]** | **$[X]** |

### Assumptions

1. Rates based on US market averages (2025)
2. Full-time-equivalent allocation for all roles
3. Includes complete implementation of MVP features
4. Does not include:
   - Marketing & sales
   - Legal & compliance
   - Office/equipment
   - Hosting/infrastructure
   - Ongoing maintenance post-launch

### Comparison: AI-Assisted Development

**Estimated time savings with Claude Code**: [X]%
**Effective hourly rate with AI assistance**: ~$[X]/hour equivalent productivity

## Step 7: Calculate Claude ROI — Value Per Claude Hour

This is the most important metric for understanding AI-assisted development efficiency. It answers: **"What did each hour of Claude's actual working time produce?"**

### 7a: Determine Actual Claude Clock Time

**Method 1: Git History (preferred)**

Run `git log --format="%ai" | sort` to get all commit timestamps. Then:

1. **First commit** = project start
2. **Last commit** = current state
3. **Total calendar days** = last - first
4. **Cluster commits into sessions**: group commits within 4-hour windows as one session
5. **Estimate session duration**: each session ≈ 1-4 hours of active Claude work (use commit density as a signal — many commits = longer session)

**Session Duration Heuristics**:

- 1-2 commits in a window → ~1-hour session
- 3-5 commits in a window → ~2-hour session
- 6-10 commits in a window → ~3-hour session
- 10+ commits in a window → ~4-hour session

**Method 2: File Modification Timestamps (no git)**

Use `find . -name "*.ts" -o -name "*.swift" -o -name "*.py" | xargs stat -f "%Sm" | sort` to get file modification times. Apply the same session-clustering logic.

**Method 3: Fallback Estimate**

If there are no reliable timestamps, estimate from lines of code:

- Assume Claude writes 200-500 lines of meaningful code per hour (much faster than humans)
- Claude active hours ≈ Total LOC ÷ 350

### 7b: Calculate Value per Claude Hour

```
Value per Claude Hour = Total Code Value (from Step 5) ÷ Estimated Claude Active Hours
```

Calculate across scenarios:

| Code Value Scenario | Claude Hours (est.) | Value per Claude Hour |
|--------------------|--------------------|-----------------------|
| Engineering only (avg) | [X] hrs | **$[X,XXX]/hr** |
| Full team equivalent (Growth Co) | [X] hrs | **$[X,XXX]/hr** |
| Full team equivalent (Enterprise) | [X] hrs | **$[X,XXX]/hr** |

### 7c: Claude Efficiency vs. Human Developer

**Speed Multiplier**:

```
Speed Multiplier = Human Dev Hours ÷ Claude Active Hours
```

Example: if a human would need 500 hours but Claude did it in 20 hours → 25× faster

**Cost Efficiency**:

```
Human Cost = Human Hours × $150/hr
Claude Cost = Subscription ($20-200/month) + API costs (estimate from project size)
Savings = Human Cost - Claude Cost
ROI = Savings ÷ Claude Cost
```

### 7d: Output Format

Add this section to the final report:

---

### Claude ROI Analysis

**Project Timeline**:

- First commit / project start: [date]
- Latest commit: [date]
- Total calendar time: [X] days ([X] weeks)

**Claude Active Hours Estimate**:

- Total sessions identified: [X] sessions
- Estimated active hours: [X] hours
- Method: [git clustering / file timestamps / LOC estimate]

**Value per Claude Hour**:

| Value Basis | Total Value | Claude Hours | $/Claude Hour |
|-------------|-------------|--------------|---------------|
| Engineering only | $[X] | [X] hrs | **$[X,XXX]/Claude hr** |
| Full team (Growth Co) | $[X] | [X] hrs | **$[X,XXX]/Claude hr** |

**Speed vs. Human Developer**:

- Estimated human hours for the same work: [X] hours
- Claude active hours: [X] hours
- **Speed multiplier: [X]×** (Claude was [X]× faster)

**Cost Comparison**:

- Human developer cost: $[X] (at $150/hr avg)
- Estimated Claude cost: $[X] (subscription + API)
- **Net savings: $[X]**
- **ROI: [X]×** (every $1 spent on Claude produced $[X] of value)

**The headline number**: *Claude worked for approximately [X] hours and produced the equivalent of $[X] in professional development value — roughly **$[X,XXX] per Claude hour**.*

---

## Notes

Present the estimate in a clear, professional format suitable for sharing with stakeholders. Include confidence intervals and key assumptions. Highlight the areas of highest complexity that drive cost.
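The Step 7a session clustering can be sketched as below. This is a rough illustration of the heuristics in the command (new session after a gap of more than 4 hours, duration from commit count), not its actual implementation, and it assumes the `git log` timestamps have already been parsed into `datetime` objects:

```python
from datetime import datetime, timedelta

def estimate_active_hours(commit_times: list[datetime]) -> float:
    """Cluster commits into sessions (a gap > 4 h starts a new session), then
    apply the duration heuristics: 1-2 commits ~1 h, 3-5 ~2 h, 6-10 ~3 h, 10+ ~4 h."""
    if not commit_times:
        return 0.0
    ordered = sorted(commit_times)
    sessions, current = [], [ordered[0]]
    for t in ordered[1:]:
        if t - current[-1] > timedelta(hours=4):
            sessions.append(current)
            current = []
        current.append(t)
    sessions.append(current)

    def session_hours(n_commits: int) -> int:
        # Commit density as a proxy for session length.
        return 1 if n_commits <= 2 else 2 if n_commits <= 5 else 3 if n_commits <= 10 else 4

    return float(sum(session_hours(len(s)) for s in sessions))
```

The result is the "Claude active hours" denominator used in 7b; dividing the Step 5 total code value by it yields the dollars-per-Claude-hour headline figure.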
Todd Saunders@toddsaunders

Fun command built in Claude Code: /cost-estimate It scans your codebase and cross-references current market rates to calculate what your project would've cost a real team to build. It looks at all the APIs, integrations, everything. Without AI: ~2.8 years. ~$650k. With AI: 30 hours. It's absurd when you start to think about it like this.

48
109
1.5K
279.1K
vertigo
vertigo@StuckInsideACan·
@SevenviewSteve Now try Phoenix LiveView with PubSub for real-time updates natively and never look back
0
0
1
223
Steve Clarke
Steve Clarke@SevenviewSteve·
I had Claude Code build the same UI in 5 different stacks. React, Hotwire, Inertia+Vue, Inertia+React, and vanilla HTML/JS. Same spec, same features. The quality gap between frameworks was massive. Wrote up what happened and what I think it means for picking a stack in 2026. x.com/SevenviewSteve…
40
23
438
171.7K
vertigo
vertigo@StuckInsideACan·
@cursor_ai request: let us change the order of the prompt queue with chevron up/down or drag-and-drop
0
0
0
23
vertigo retweeted
atulit
atulit@atulit_gaur·
medium.com/@atulit23/stable-diffusion-i-mathematics-behind-it-2957f5839e80
0
11
127
7.2K
vertigo retweeted
Rohan Paul
Rohan Paul@rohanpaul_ai·
LLMs struggle with performance in Retrieval Augmented Generation (RAG) when many documents are retrieved. Prior studies did not isolate whether this is due to document quantity or context length. This paper investigates how the number of documents affects LLM performance while keeping context length constant. → The paper creates custom datasets from a multi-hop Question Answering dataset. → Document count is varied while context length and the positions of relevant information are kept constant. → Distractor documents are removed, and the remaining documents are extended to maintain a fixed context length. → Evaluations using Llama-3.1, Qwen2, and Gemma2 models show that increasing document count often degrades performance by up to 10 percent. → The Qwen2 model is less affected by document count. → Adding random, unrelated documents improves performance, unlike adding related but distracting documents. ---------------------------- Paper - arxiv.org/abs/2503.04388v1 Paper Title: "More Documents, Same Length: Isolating the Challenge of Multiple Documents in RAG"
Rohan Paul tweet media
2
7
32
4K
vertigo retweeted
Matthew Berman
Matthew Berman@MatthewBerman·
Major AI breakthrough: Diffusion Large Language Models are here! They're 10x faster and 10x cheaper than traditional LLMs. Here's everything you need to know:
149
388
3K
429K
vertigo 리트윗함
Rohan Paul
Rohan Paul@rohanpaul_ai·
LLMs hallucinate, mixing truthful and false information. This makes factuality alignment noisy during training because response-level preference learning cannot isolate factual errors. This paper introduces Mask-DPO. It uses sentence-level factuality masks during Direct Preference Optimization. Mask-DPO learns only from factually correct parts of preferred responses. It avoids penalizing factual content in dispreferred responses. 📌 Sentence-level masks in Mask-DPO refine preference learning. This method reduces noise from mixed factuality responses. 📌 Mask-DPO's fine-grained DPO significantly boosts factuality and generalization. It achieves this by reducing noisy feedback. ---------- Methods Explored in this Paper 🔧: → Mask-DPO uses sentence-level factuality annotations as masks. → It constructs preference pairs by ranking responses based on their sentence-level factuality scores. → During training, Mask-DPO applies masks to the DPO loss function. → These masks ignore incorrect sentences in preferred responses and correct sentences in dispreferred responses. → This fine-grained approach focuses learning on factual correctness at the sentence level. ---------------------------- Paper - arxiv.org/abs/2503.02846 Paper Title: "Mask-DPO: Generalizable Fine-grained Factuality Alignment of LLMs"
Rohan Paul tweet media
0
1
12
1.6K
vertigo
vertigo@StuckInsideACan·
@ThatArrowsmith @elixirlang @elixirphoenix Works fine if you tell it to use the latest versions and tag the documentation @docs, then add Phoenix and Phoenix LiveView.. it uses all the latest syntax.. of course the FIFO issue remains so you periodically have to remind it.. the beta feature to manage the context doesn't really help
0
0
0
165
George Millo
George Millo@georgemillo·
Claude totally sucks at writing Phoenix - it still uses all the old pre-1.7 syntax like `form_for`, `Routes` etc.. Working around this by writing detailed explanations of the new syntax in my cursorrules but it's a PITA... is there a better way? @elixirlang @elixirphoenix
17
3
58
4.7K
GREG ISENBERG
GREG ISENBERG@gregisenberg·
This chart is nuts. Software developer jobs down 70% from peak. People will blame the end of free money. But something way more interesting is happening. The middle class engineer is dying. And it's dying because they're not needed anymore. One good dev with GitHub Copilot ships what entire teams did five years ago. Microsoft just reported the highest revenue per employee in history. The "entry-level engineer" doesn't exist anymore. Instead, we have product builders who happen to code. Armed with AI, they ship entire products in days. Meanwhile, the truly elite engineers are making more money than ever. And they've shifted to working mostly on frontier tech. I mean the stuff that's really hard. AGI at OpenAI. Designing rockets at SpaceX. Self-driving car tech at Tesla. Product builders are becoming solopreneurs and creators. Frontier engineers are making hedge fund money. In 2025, "software engineer" doesn't mean what it meant in 2020. And that's what this chart really shows. The middle is gone. The top is elite status. And everyone else is becoming a builder.
GREG ISENBERG tweet media
1.2K
2K
15K
4.2M
vertigo retweeted
Thomas Dohmke
Thomas Dohmke@ashtom·
Today, we are infusing the power of agentic AI into the GitHub Copilot experience, elevating Copilot from pair to peer programmer 🤖 (1/4) github.blog/news-insights/…
251
727
4.8K
1.4M
vertigo
vertigo@StuckInsideACan·
@cursor_ai please: - display when the limits will be reset on the account page - make it more intuitive when composer is applying changes or asking for permission (it's kind of vague sometimes now) - any chance of accepting o3-mini-high on BYO keys?
0
0
0
94
vertigo
vertigo@StuckInsideACan·
@7etsuo @tsotchke Nice job cherry-picking favorable test cases. Factoring in all the messy bits—hardware integration, error correction, scaling up—those numbers don't add up. It's smoke and mirrors, and it won't deliver.. #scam
3
0
1
256
vertigo
vertigo@StuckInsideACan·
@shaoruu @cursor_ai @ryolu_ Great work! Also, more powerful models on composer and a multiple-agents mode (expert pool) with bring-your-own-keys support, yes please!
0
0
2
104
vertigo
vertigo@StuckInsideACan·
@chris_mccord Have you seen it produce unit tests for live views with some actual coverage? Seems composer agent mode in Cursor had quite some difficulties with those, have you had better results? The sample I tried was a CSV parsing form, fully functional, yet quite often vulnerable to regression..
0
0
0
80
Chris McCord
Chris McCord@chris_mccord·
So you can just give it a postgres URL (or sqlite db file) and it will just... explore it and offer to build things. Even with a crazy nonstandard (from Ecto perspective) database. The search bar crashed, I sent the logs to the agent, and boom realtime searching
14
11
154
12.7K
vertigo retweeted
Quantum Security Defence
Quantum Security Defence@QSECDEF·
New paper outlines the DH method for Quantum Computers, suggesting that there will be quantum demand for Diffie-Hellman Cryptography in the post-quantum world. There are already teams working on post-quantum cryptography, such as the Knot Diffie-Hellman from @quantdotbond, which potentially offers a faster platform to secure Web3 transactions in a post-quantum world. The paper follows. arxiv.org/pdf/2501.09568
3
15
157
501.7K
vertigo retweeted
Paulo A. Viana
Paulo A. Viana@pauloaviana·
With the release of $KNOT getting closer, the moment couldn't be better to announce our plan to integrate AI support into $KNOT. With AI becoming more present in cybersecurity, this will be essential for real-world implementations. It will help us during the second phase of $KNOT, that is, optimization of the implementation and further development, by detecting security problems and anomalies. We will aim to use AIs commonly used for cryptographic verification and integration, as well as new ones implemented by our team specifically for $KNOT, with the aim of integrating it and avoiding vulnerabilities. The use of AI will be essential to identify potential cryptographic threats that may arise with the advancement of quantum research.
3
8
32
1.8K
vertigo retweeted
Paulo A. Viana
Paulo A. Viana@pauloaviana·
It will be amazing to be able to speak about $KNOT and @quantdotbond in the biggest Quantum Summit of the world!
Quant Bond@quantdotbond

We're heading to the Quantum Innovation Summit 2025! 🚀 This premier event brings together industry giants like Microsoft, IBM, D-Wave, Nvidia, and more. With @pauloaviana and @VOTSoul1929 representing us, we’re ready to: 💡 Connect with leaders shaping the quantum world 🤝 Explore partnerships and investment opportunities 🌐 Showcase Quant.Bond to the global quantum community The future of $KNOT and Quantum DeSci has begun. Stay tuned for updates from the summit floor!

2
6
29
2.5K
vertigo retweeted
Quant Bond
Quant Bond@quantdotbond·
We're heading to the Quantum Innovation Summit 2025! 🚀 This premier event brings together industry giants like Microsoft, IBM, D-Wave, Nvidia, and more. With @pauloaviana and @VOTSoul1929 representing us, we’re ready to: 💡 Connect with leaders shaping the quantum world 🤝 Explore partnerships and investment opportunities 🌐 Showcase Quant.Bond to the global quantum community The future of $KNOT and Quantum DeSci has begun. Stay tuned for updates from the summit floor!
Quant Bond tweet media
12
16
54
6.8K