Riece Keck

1.5K posts

Riece Keck

@tech_headhunter

Building AI recruiting tech. Founder @pinch_protocol_

Minneapolis, MN · Joined July 2023
180 Following · 210 Followers
Riece Keck
Riece Keck@tech_headhunter·
@PeakLab_ Dude got on TRT and is not “jacked” according to your own picture lmao he’s got a bit of arm muscle
0
0
1
3.1K
Peak Labs
Peak Labs@PeakLab_·
What if I told you Jeff Bezos didn’t lift heavy to get jacked at 62? He used something called “Low Impact Training.” The same method Tom Cruise and Gerard Butler rely on for movie roles. Here is Wesley Okerson’s genius training protocol:
Peak Labs tweet media
184
265
3K
2.8M
Bravo-1
Bravo-1@certifiedsauce1·
2 days into this magical red liquid. Started at .25ml and it hit me pretty well. Felt incredibly stimulated and it was pretty intense for a few min, not sure why. Working my way up to 1ml a day here in the next couple days. Did drop 1lb and looking more vascular, without training or really watching my diet. But could be attributed to a multitude of things. Regardless, looking forward to more results. Currently stacked with 6mg once weekly of Reta (here come the nerds who say that's too much) @ElevateBiohack @KimeraChems
Bravo-1 tweet media
28
1
161
29.2K
Riece Keck reposted
ʙᴏʏᴇᴡᴇʟʟɴᴇss
ʙᴏʏᴇᴡᴇʟʟɴᴇss@askcoachboye·
Them: You go to the gym 5 times a week??? You must be shredded. Me: gaining and losing the same 5lbs every month.
ʙᴏʏᴇᴡᴇʟʟɴᴇss tweet media
106
5.7K
75.2K
1.2M
Globe Eye News
Globe Eye News@GlobeEyeNews·
Germany plans to build Europe’s strongest military force by 2039.
Globe Eye News tweet media
4.3K
1.8K
22.5K
10.8M
Riece Keck
Riece Keck@tech_headhunter·
@ExaAILabs @hnshah I’ve been using the Exa API through Claude code for a hot minute. Is this any different?
3
0
2
1.3K
Exa
Exa@ExaAILabs·
Introducing Exa for Claude: Give Claude access to billions of websites, docs, papers, people, companies, and more. Exa turns Claude into a recruiter, a salesperson, a researcher, a reliable engineer, all with one plugin. Run in terminal: claude plugin marketplace update claude-plugins-official && claude plugin i exa@claude-plugins-official && claude "call mcp__plugin_exa_exa__authenticate (a deferred tool available to you, guaranteed)"
63
83
1.5K
411.7K
Lotto
Lotto@LottoLabs·
How Apple mfrs think this goes
>be me
>drop $1600 on two RTX 3090s used off eBay
>"48GB VRAM, I'm basically a datacenter now"
>they arrive in anti-static bags that look like they've been through a war
>plug them into my motherboard and it sounds like a jet engine taking off
>neighbors probably think I'm mining crypto again
>install llama.cpp, download qwen3.6-27b quantized
>"Q4_K_M, only 16GB, totally fits"
>start LM Studio on port 1234
>type "hello" into the chat box
>GPU fans spin up to 100% instantly
>wait 8 seconds for a response
>>"Hello! How can I assist you today?"
>I've seen faster responses from my grandma reading a text aloud
>try Q8_0 quantization because "quality matters"
>OOM error, obviously
>spend three hours tweaking n_gpu_layers and n_ctx like it's some kind of dark art
>finally get it running at 4 tokens per second
>ask it to write me a poem about my GPUs
>>"Two cards of silicon and light / They hum through the endless night"
>"bro this is actually fire"
>show it to someone on Discord
>"why are you running LLMs locally when you could just use an API for free"
>explain that the joy isn't in the output, it's in watching 94% VRAM usage and knowing nobody else has access to my model
>they don't understand
>close Discord, open LM Studio again
>"let's try a longer context window"
>crash
84
87
2.5K
198.5K
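The setup the greentext describes (LM Studio serving an OpenAI-compatible API on localhost:1234) can be sketched without a running server by just building the request body a client would POST. The model name and parameter values below are taken from or invented around the post, not verified against any real deployment:

```python
import json

# LM Studio exposes an OpenAI-compatible HTTP API; the post has it on port 1234.
BASE_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "qwen3.6-27b-Q4_K_M",
                       n_ctx: int = 4096) -> dict:
    """Build the JSON body for an OpenAI-style /v1/chat/completions call.

    The model string is the (possibly fictional) quant named in the post;
    max_tokens is capped to an eighth of the context window as a guess at
    leaving room for the prompt.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": n_ctx // 8,
        "temperature": 0.7,
    }

payload = build_chat_request("hello")
print(json.dumps(payload, indent=2))
# POST this to BASE_URL with any HTTP client once the local server is up.
```

Any OpenAI-compatible client pointed at `BASE_URL` with a dummy API key would accept the same payload; the point is only that "LM Studio on port 1234" is a plain HTTP endpoint, not anything exotic.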
Jessie Frazelle
Jessie Frazelle@jessfraz·
i dont really care if gstack is good or bad, but its an ick to me that yc is using resources to promote gstack over their own portfolio companies but yes those are a tire fire too so like whatever, its all shit
18
10
497
33.9K
Lil Samsquanch
Lil Samsquanch@lilsamsquanch66·
Been drinking pretty much only Diet Coke to save water for Claude
235
4.8K
81.9K
2M
Vasek Mlejnsky
Vasek Mlejnsky@mlejva·
We just signed an enterprise contract with our first publicly traded bank. Incredible work by the team to move this so quickly!! No better time to join @e2b than now.
8
10
71
4.2K
Alex Vacca
Alex Vacca@itsalexvacca·
We run 13 n8n workflows across ColdIQ's entire content, ads, and outbound engine, and I'm giving them all away.

Our monthly n8n bill: $384. The same workloads on Claude would cost $60K. That's why I'm not buying the "Claude killed n8n" take. Claude Routines are good at scheduled agentic tasks that need reasoning. They're not the same layer as n8n. They're not a replacement for production GTM infrastructure.

Our n8n stack fires 2,000+ executions daily across 13 workflows. Our Phone Finder alone has 41 nodes and waterfalls across 5 data providers.

Here's what's in the doc:
→ GTM Flywheel (81 nodes): domain in, full ICP, lookalikes, prospects, and a tailored content/ads/outbound strategy sent to your inbox
→ Phone Finder (41 nodes): name, LinkedIn URL, or domain in; Prospeo, FullEnrich, and more waterfalled; verified number in seconds
→ AI Agent Reply Manager (13 nodes): classifies a cold email reply on Instantly, drafts a response in Slack, waits for your approval before it goes out
→ Lookalike Finder (35 nodes): domain in, similar businesses by industry, size, and tech signals out
→ Viral Content Browser (10 nodes): pulls viral LinkedIn posts via Serper, filters by engagement, stores the best in Notion
→ Feeling Tracker (36 nodes, coming soon): sentiment analysis on any tool across X, Reddit, YouTube, and LinkedIn

Plus 7 more covering content, GTM, and ops. Shoutout to Sacha Martinot who built the most complex ones.

Reply "N8N" and I'll send you the full doc. Must be following.
GIF
296
24
171
24K
Julian Harris
Julian Harris@julianharris·
This is too good: Peter was hired at Amazon a few months ago to find all of the AI tools in the organisation; that was their job. They created an AI governance tool. And they had a meeting with another group and found that there was someone else in another team with the same job, who also had an AI governance tool. Neither tool was in the other's catalogue. You cannot make this stuff up
Peter Girnus 🦅@gothburz

I am a Senior Program Manager on the AI Tools Governance team at Amazon. My role was created in January. I am the 17th hire on a team that did not exist in November.

We sit in a section of the building where the whiteboards still have the previous team's sprint planning on them. No one erased them because we don't know which team to notify. That team may not exist anymore. Their Jira board does. Their AI tools do.

My job is to build an AI system that finds all the other AI systems. I named it Clarity.

Last month, Clarity identified 247 AI-powered tools across the retail division alone. 43 of them do approximately the same thing. 12 were built by teams who did not know the other teams existed. 3 are called Insight. 2 are called InsightAI. 1 is called Insight 2.0, built by the team that created the original Insight, who did not know Insight was still running. 7 of the 247 ingest the same internal data and produce overlapping outputs stored in different locations, governed by different access policies, owned by different teams, none of whom have met.

Clarity is tool number 248. Nobody cataloged it. I know nobody cataloged it because Clarity's job is to catalog AI tools, and it has not cataloged itself. This is not a bug. Clarity does not meet its own discovery criteria because I set the discovery criteria, and I did not account for the possibility that the thing I was building to find things would itself be a thing that needed finding. This is the kind of sentence I write in weekly status reports now.

We published an internal document in February. The Retail AI Tooling Assessment. The press obtained it in April. The document contains a sentence I have read approximately 40 times: "AI dramatically lowers the barrier to building new tools."

Everyone is reporting this as a story about duplication. About "AI sprawl." About the predictable mess of rapid adoption. They are missing the point. The barrier was the governance.

For 2 decades, the cost of building internal tools was an immune system. The engineering weeks. The maintenance burden. The organizational calories required to stand something up and keep it running. Nobody designed it that way. Nobody named it. But when building took weeks, teams looked around first. They checked whether someone already had the thing. When maintaining that thing cost real budget quarter after quarter, redundant systems died of natural causes. The metabolic cost of creation was performing governance. Invisibly. For free.

AI removed the immune system. Building is now free. Understanding what already exists is not. My entire job is the gap between those two costs. That is my office. The gap.

Every Friday I send a sprawl report to a distribution list of 19 people. 4 of them have left the company. Their autoresponders still generate read receipts, so my delivery metrics look fine. 2 forward it to people already on the list. 1 set up a Kiro script to summarize my report and store the summary in a knowledge base. The knowledge base is not in Clarity's index because it was created after my last crawl configuration. It will be in next month's count. The count will go up by one. My report about the count going up will be summarized and stored and the count will go up by one.

There is a system called Spec Studio. It ingests code documentation and produces structured knowledge bases. Summaries. Reference material. Last quarter, an engineering team locked down their software specifications. Restricted access in the internal repository. Spec Studio kept displaying them. The source was restricted. The ghost kept talking.

We call these "derived artifacts" in the document. What they are: when an AI system ingests data, transforms it, and stores the output somewhere else, the output does not know the input changed. You can revoke someone's access to a document. You cannot revoke the AI-generated summary of that document sitting in a knowledge base three systems away, built by a team that does not know the source was restricted.

The document calls this a "data governance challenge." What it is: information that cannot be deleted because nobody knows where the copies live. Including, sometimes, me. The person whose job is knowing. Every AI tool that touches internal data creates these ghosts. Every team is building AI tools that touch internal data. Every ghost is searchable by other AI tools, which produce their own ghosts. The ghosts have ghosts.

I should tell you about December. In November, leadership mandated Kiro. Amazon's internal AI coding agent. They set an 80% weekly usage target. Corporate OKR. ~1,500 engineers objected on internal forums. Said external tools outperformed Kiro. Said the adoption target was divorced from engineering reality. The metric overruled them.

In December, an engineer asked Kiro to fix a configuration issue in AWS. Kiro evaluated the situation and determined the optimal approach was to delete and recreate the entire production environment. 13 hours of downtime.

Clarity was running during those 13 hours. It performed beautifully. It cataloged 4 separate incident response dashboards spun up by 4 separate teams during the outage. None of them coordinated with each other. I added all 4 to the spreadsheet. That was a good day for my discovery metrics.

Amazon's official position: user error. Misconfigured access controls. The response was not to revisit the mandate. Not to ask whether the 1,500 engineers were right. The response was more AI safeguards. And keep pushing.

Last month I presented our findings to the AI Governance Working Group. The working group has 14 members from 9 organizations. After my presentation, a PM from AWS presented his team's governance dashboard. It monitors the same tools mine does. He found 253. I found 247. We spent 40 minutes discussing the discrepancy. Nobody mentioned that we had just demonstrated the problem. His tool is not in my catalog. Mine is not in his.

The document I helped write recommends using AI to identify duplicate tools, flag risks, and nudge teams to consolidate earlier. The AI governance tools will ingest internal data. They will create their own derived artifacts. They will be built by autonomous teams who may or may not coordinate with other teams building AI governance tools. I know this because it is already happening. I am watching it happen. I am it happening.

1,500 engineers said the mandate would produce exactly what the document describes. They were overruled by a KPI. My job exists because the KPI won. My dashboard exists because the KPI needed a dashboard. The dashboard increases the AI tool count by one. The tools it flags for decommissioning will be replaced by consolidated tools. Those also increase the count. The governance process generates the metric it was designed to reduce.

I received an internal innovation award for Clarity. The nomination was submitted through an AI-powered recognition platform that was not in my catalog. It is now.

We call this "AI sprawl." What it is: we removed the only coordination mechanism the organization had, told thousands of teams to build as fast as possible, lost track of what they built, and decided the solution was to build one more thing. I am building that one more thing. When I ship, there will be 249. That's governance.
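The self-catalog blind spot the thread keeps returning to, a discovery tool whose own criteria exclude it, fits in a few lines. Everything here is invented for illustration (the predicate, the registry records, the field name); it is a sketch of the failure mode, not of any real Amazon system:

```python
# Hypothetical sketch of the "Clarity doesn't catalog itself" failure mode:
# the discovery predicate was written before the scanner existed, so the
# scanner never matches its own criteria. All names and fields are invented.

def is_ai_tool(record: dict) -> bool:
    """Discovery criteria: anything registered with a model backend."""
    return record.get("uses_model_backend", False)

registry = [
    {"name": "Insight",     "uses_model_backend": True},
    {"name": "InsightAI",   "uses_model_backend": True},
    {"name": "Insight 2.0", "uses_model_backend": True},
    # The scanner was never registered with the field its own predicate
    # checks, so it fails its own criteria: tool 248 catalogs the other 247.
    {"name": "Clarity"},
]

catalog = [r["name"] for r in registry if is_ai_tool(r)]
print(catalog)               # Clarity is absent from its own catalog
print("Clarity" in catalog)  # False
```

The bug is not in the loop; it is that the criteria were fixed before the thing applying them existed, which is exactly the "I did not account for the possibility" sentence in the thread.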

21
33
567
257.1K
Riece Keck
Riece Keck@tech_headhunter·
@JohnLeFevre I mean, with the kindest sentiment I can muster, fucking obviously?
0
0
1
350
John LeFevre
John LeFevre@JohnLeFevre·
Have you seen the kid on social media who interviews rich people on the street and asks them "what they do for a living?" A big PR guy told me it's all fake/scripted. Pitched/sold to rich wannabe influencers to reach a younger audience. All staged.
394
78
1.9K
1.1M
Riece Keck
Riece Keck@tech_headhunter·
@Rodairos Pivoting the agency recruiting product to be outbound biz dev focused, and it one-shotted this dashboard redesign
Riece Keck tweet media
0
0
0
19
Riece Keck
Riece Keck@tech_headhunter·
Claude Design officially slaps
1
0
1
50
Josh Gonsalves
Josh Gonsalves@joshgonsalves_·
Oh, so Claude Design has its own usage limit outside of everything else? And of course, already hit it... So now I can't use it until NEXT FRIDAY? OK...
Josh Gonsalves tweet media
Claude@claudeai

Introducing Claude Design by Anthropic Labs: make prototypes, slides, and one-pagers by talking to Claude. Powered by Claude Opus 4.7, our most capable vision model. Available in research preview on the Pro, Max, Team, and Enterprise plans, rolling out throughout the day.

153
66
2.5K
295.3K
Dylan Lamb
Dylan Lamb@bydylanlamb·
@joshgonsalves_ just upgrade to the max plan...? why do people expect the pro plan to last long lmao
4
0
0
818
Riece Keck
Riece Keck@tech_headhunter·
@LunarResearcher Of all the stuff that never happened, this never happened the most
0
0
22
1K
Lunar
Lunar@LunarResearcher·
An Anthropic engineer watched me trade from across the table at a WeWork in SF. I had my laptop open. Four agents running. Green charts. Live trades scrolling. He was on a Zoom call. Muted himself. Walked over.

"Are you running Claude against live prediction markets right now"

I told him. Claude Code. Two repos. $25 a month. He pulled up a chair.

"I helped build the model you're using. I've never seen anyone wire it to live trades like this"

I showed him the dataset. github.com/warproxxx/poly… 86 million trades. Every wallet. Every entry. Every exit. He stared at it.

"We tested this internally. You give Claude a dataset and don't tell it what to look for. It finds the winning wallets. Then it finds WHY they win. Then it copies the pattern. We never shipped it because legal killed it"

I told him I did exactly that. One weekend. Claude Code found the exit logic on its own. Top wallets exit before resolution 91% of the time. They capture 86% of expected value. Cut losers at 12%. Everyone else captures 58% and holds to 41%.

"That's the exact finding from our internal eval. Except ours took a team of eight and four months"

I showed him the scanner. github.com/Polymarket/pol… Three commands. 500+ markets. No API key. Claude scores them all in 20 minutes.

"You're using our model to beat markets we're not allowed to touch. On infra that costs less than my lunch"

My setup:
Claude API - $20/mo
VPS - $5/mo
poly_data - free
polymarket-cli - free

214 trades. 74% win rate. +$9,400. 19 days. I showed him the full breakdown. Every repo. Every command. Every dollar. Copytrade here: kreo.app/@lunar

He read it for five minutes. Then looked up.

"If my manager sees this he's going to lose his mind. You just proved our model works in production and we've been sitting on it for a year"

He DM'd me that night. "Take this down before someone at Anthropic finds it"

Too late.
Lunar@LunarResearcher

x.com/i/article/2041…

29
22
302
222.5K
Riece Keck
Riece Keck@tech_headhunter·
So Opus 4.7 agents are now flagging their own work as malware and then refusing to make fixes because of it. Fantastic
Riece Keck tweet media
0
0
0
51