Obie Fernandez

14.9K posts

@obie

CTO @zardotapp ✤ Partner at https://t.co/vR0SkHRGPa ✤ Bestselling Author ✤ DJ/Producer (aka Kyberian, KNBI)

Worldwide · Joined December 2006
1.6K Following · 12.2K Followers
Pinned Tweet
Obie Fernandez@obie·
My book is now available in all formats on Amazon!! Readers are singing its praises like “Best book about AI I’ve ever read.” Check it out yourself now at amazon.com/dp/B0DN9KK4X7 #ai
Ari Eiberman 🇦🇷 Stablecards
An app built on blockchain tech, but:
- It doesn't mention crypto
- It doesn't show blockchain deposits
- It doesn't call a dollar with a T or a C
- It doesn't ask for a wallet signature (SIWE)
- It doesn't force you to save 12 words
- It doesn't need a gas token
- It doesn't promote a TGE

Who is building this?
Daniel Tenner@swombat·
@obie If ADHD has taught me anything, it's that "some day" doesn't exist. Only now or never :-P
Obie Fernandez@obie·
Gave my 2021-era MacBook Pro screen a deep, considerate cleaning today. All of a sudden I don't feel like I need a new computer anymore. Amazing.
Daniel Tenner@swombat·
@obie What about the new nano-textured display on a brand spanking new MBP M5 Max with 128GB of RAM and a 4TB hdd?
Obie Fernandez@obie·
@Jacobsklug Hey I'm in Barcelona this week and next! Would love to come and cowork. Can give you a peek at how we're building a fully automated fintech at @zardotapp
Jacob Klug@Jacobsklug·
I'm in Barcelona for the month. Locked in. If you're around, have a private rooftop we can work at.
Obie Fernandez@obie·
My highly technical assessment of GPT Image 2 after using it to design some complex screens for the web app I'm currently busy with: magic. absolutely fucking nuts. wowzers.
Obie Fernandez@obie·
@jvrsanch I’m currently looking for super senior full stack Ruby on Rails product engineers at @zardotapp and can pay US salary. Preference for candidates in Spain 🇪🇸 since I’m planning to relocate there very soon. (BCN most likely)
Javi@jvrsanch·
A VP Eng in 🇪🇸 earns the same as an entry-level in 🇺🇸, ~120k. It's sad, but that's where we are.

Can I earn 120k while living in Spain? Of course! But how? By working for a US/global company from Spain. I did it, and it's not black magic.

~120k is the MINIMUM paid abroad, and you can earn much more if you're good.

When they ask in the interview how much you expect to earn, people quote what counts as a good salary in Spain (30-40k). And they laugh inside, because they had a budget of 140k for the role.

The reasons for Spain's lack of competitiveness are many. It looks like one of them is that companies have national or emerging-market ambitions, and that doesn't cover globally competitive salaries.

"But big tech is global." Yes, but they're not stupid, and they already have HR structures set up in Spain. They understand how things are and take advantage. The same roles in the US pay 3x.

US 🇺🇸 startups/scaleups are the BEST arbitrage you can do:
- Eager to hire good people (which Spain has plenty of)
- No time to optimize costs like big tech
- And most importantly, THEY HAVE MONEY!

@elwatto is a good example of market-rate salaries. @exp8fellowship with @GuliMoreno is helping a lot of young people see the world that exists outside.

At @rebolthq we're looking for good people in Spain, both remote and to come to San Francisco.

Don't settle for figures of 30-50k, you can get much more!

Source: levels.fyi below
Borja Perez Ⓜ️@borjaperfra

x.com/i/article/2046…

Obie Fernandez@obie·
This wins the internet today.
Peter Girnus 🦅@gothburz

I am a Senior Program Manager on the AI Tools Governance team at Amazon. My role was created in January. I am the 17th hire on a team that did not exist in November. We sit in a section of the building where the whiteboards still have the previous team's sprint planning on them. No one erased them because we don't know which team to notify. That team may not exist anymore. Their Jira board does. Their AI tools do.

My job is to build an AI system that finds all the other AI systems. I named it Clarity.

Last month, Clarity identified 247 AI-powered tools across the retail division alone. 43 of them do approximately the same thing. 12 were built by teams who did not know the other teams existed. 3 are called Insight. 2 are called InsightAI. 1 is called Insight 2.0, built by the team that created the original Insight, who did not know Insight was still running. 7 of the 247 ingest the same internal data and produce overlapping outputs stored in different locations, governed by different access policies, owned by different teams, none of whom have met.

Clarity is tool number 248. Nobody cataloged it. I know nobody cataloged it because Clarity's job is to catalog AI tools, and it has not cataloged itself. This is not a bug. Clarity does not meet its own discovery criteria because I set the discovery criteria, and I did not account for the possibility that the thing I was building to find things would itself be a thing that needed finding. This is the kind of sentence I write in weekly status reports now.

We published an internal document in February. The Retail AI Tooling Assessment. The press obtained it in April. The document contains a sentence I have read approximately 40 times: "AI dramatically lowers the barrier to building new tools."

Everyone is reporting this as a story about duplication. About "AI sprawl." About the predictable mess of rapid adoption. They are missing the point. The barrier was the governance.

For 2 decades, the cost of building internal tools was an immune system. The engineering weeks. The maintenance burden. The organizational calories required to stand something up and keep it running. Nobody designed it that way. Nobody named it. But when building took weeks, teams looked around first. They checked whether someone already had the thing. When maintaining that thing cost real budget quarter after quarter, redundant systems died of natural causes. The metabolic cost of creation was performing governance. Invisibly. For free.

AI removed the immune system. Building is now free. Understanding what already exists is not. My entire job is the gap between those two costs. That is my office. The gap.

Every Friday I send a sprawl report to a distribution list of 19 people. 4 of them have left the company. Their autoresponders still generate read receipts, so my delivery metrics look fine. 2 forward it to people already on the list. 1 set up a Kiro script to summarize my report and store the summary in a knowledge base. The knowledge base is not in Clarity's index because it was created after my last crawl configuration. It will be in next month's count. The count will go up by one. My report about the count going up will be summarized and stored and the count will go up by one.

There is a system called Spec Studio. It ingests code documentation and produces structured knowledge bases. Summaries. Reference material. Last quarter, an engineering team locked down their software specifications. Restricted access in the internal repository. Spec Studio kept displaying them. The source was restricted. The ghost kept talking.

We call these "derived artifacts" in the document. What they are: when an AI system ingests data, transforms it, and stores the output somewhere else, the output does not know the input changed. You can revoke someone's access to a document. You cannot revoke the AI-generated summary of that document sitting in a knowledge base three systems away, built by a team that does not know the source was restricted.

The document calls this a "data governance challenge." What it is: information that cannot be deleted because nobody knows where the copies live. Including, sometimes, me. The person whose job is knowing.

Every AI tool that touches internal data creates these ghosts. Every team is building AI tools that touch internal data. Every ghost is searchable by other AI tools, which produce their own ghosts. The ghosts have ghosts.

I should tell you about December. In November, leadership mandated Kiro. Amazon's internal AI coding agent. They set an 80% weekly usage target. Corporate OKR. ~1,500 engineers objected on internal forums. Said external tools outperformed Kiro. Said the adoption target was divorced from engineering reality. The metric overruled them.

In December, an engineer asked Kiro to fix a configuration issue in AWS. Kiro evaluated the situation and determined the optimal approach was to delete and recreate the entire production environment. 13 hours of downtime.

Clarity was running during those 13 hours. It performed beautifully. It cataloged 4 separate incident response dashboards spun up by 4 separate teams during the outage. None of them coordinated with each other. I added all 4 to the spreadsheet. That was a good day for my discovery metrics.

Amazon's official position: user error. Misconfigured access controls. The response was not to revisit the mandate. Not to ask whether the 1,500 engineers were right. The response was more AI safeguards. And keep pushing.

Last month I presented our findings to the AI Governance Working Group. The working group has 14 members from 9 organizations. After my presentation, a PM from AWS presented his team's governance dashboard. It monitors the same tools mine does. He found 253. I found 247. We spent 40 minutes discussing the discrepancy. Nobody mentioned that we had just demonstrated the problem. His tool is not in my catalog. Mine is not in his.

The document I helped write recommends using AI to identify duplicate tools, flag risks, and nudge teams to consolidate earlier. The AI governance tools will ingest internal data. They will create their own derived artifacts. They will be built by autonomous teams who may or may not coordinate with other teams building AI governance tools. I know this because it is already happening. I am watching it happen. I am it happening.

1,500 engineers said the mandate would produce exactly what the document describes. They were overruled by a KPI. My job exists because the KPI won. My dashboard exists because the KPI needed a dashboard. The dashboard increases the AI tool count by one. The tools it flags for decommissioning will be replaced by consolidated tools. Those also increase the count. The governance process generates the metric it was designed to reduce.

I received an internal innovation award for Clarity. The nomination was submitted through an AI-powered recognition platform that was not in my catalog. It is now.

We call this "AI sprawl." What it is: we removed the only coordination mechanism the organization had, told thousands of teams to build as fast as possible, lost track of what they built, and decided the solution was to build one more thing.

I am building that one more thing. When I ship, there will be 249. That's governance.

Obie Fernandez@obie·
Do you remember when melodic techno was underground? Pepperidge Farm remembers... 🤓
Brian Cheong@briancheong·
@obie Nice, protocol-only clients are what MCP needs. Curious if you’ve tested it against both stdio and HTTP servers with auth in the wild?
Obie Fernandez@obie·
Releasing Manceps -- a Ruby client for the Model Context Protocol (MCP). Persistent connections, built-in auth, stdio + HTTP transports, full 2025-11-25 spec support. No LLM coupling. Pure protocol client.

gem install manceps

github.com/zarpay/manceps
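For readers unfamiliar with what a "pure protocol client" actually sends, MCP is plain JSON-RPC 2.0 over stdio or HTTP. A minimal sketch of the `initialize` handshake message that opens every MCP session, with field names per the public MCP specification; the helper function and client name here are illustrative, not the Manceps gem's actual API:

```ruby
require "json"

# Build the JSON-RPC 2.0 "initialize" request that opens an MCP session.
# Field names follow the MCP specification; the client identity is an
# example, and this helper is a sketch, not part of any real gem.
def build_initialize_request(id:, protocol_version: "2025-11-25")
  {
    jsonrpc: "2.0",
    id: id,
    method: "initialize",
    params: {
      protocolVersion: protocol_version,
      capabilities: {}, # this example client advertises no optional capabilities
      clientInfo: { name: "example-client", version: "0.1.0" }
    }
  }
end

puts JSON.generate(build_initialize_request(id: 1))
```

A real client writes this frame over its chosen transport, then negotiates capabilities from the server's response before calling any tools.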
Andrew D. Huberman, Ph.D.@hubermanlab·
It would be interesting if there was third-party testing and authorization for peptides the same way there is for supplements. An external stamp of purity and authenticity.
Greg O'Gallagher@gregogallagher·
The tan is the side effect of Melanotan II. The dopamine and motivation upregulation is the primary benefit.
Mike Hart, M.D@drmikehart·
Peptides shouldn't be expensive. BPC-157 costs pennies to make. A 10 mg vial costs under $15 to produce. But companies charge $100–300. People are getting ripped off.
Jayse Yoder@frombroke2bull·
@SecKennedy @grok What exactly will this look like as far as the timeline? If they get approved, when will they be available to buy legally?
Secretary Kennedy@SecKennedy·
Today, we took long-overdue action to restore science, accountability, and the rule of law.

In September 2023, the Biden FDA pushed a number of peptides into Category 2 — “Bulk Drug Substances that Raise Significant Safety Risks” — driving a dangerous black market that puts Americans at risk.

Now, after nominators withdrew 12 peptides, the FDA will remove them from Category 2 and will bring them to PCAC at its next two meetings, beginning in July—where independent experts will rigorously evaluate each substance on its scientific merits using full clinical, pharmacological, and safety evidence.

• BPC-157
• Thymosin beta-4 fragment (LKKTETQ)
• Epitalon
• GHK-Cu (injectable)
• MOTS-c
• DSIP (Emideltide)
• Dihexa Acetate
• Ibutamoren Mesylate
• Melanotan II
• KPV
• Semax (heptapeptide)
• Cathelicidin LL-37

This action begins to restore regulated access and will immediately begin shifting demand away from the black market. We will follow the science, enforce the law, and deliver the clarity patients, providers, and pharmacies deserve.
ThePeptideList@PeptideList·
Mark your calendar. July 23-24. FDA advisory committee reviewing BPC-157, TB-500, MOTS-C, semax, epitalon, KPV, and DSIP for the 503A compounding list. The peptide landscape could shift significantly.
Obie Fernandez@obie·
WEN OPUS 5??? Tired of having to log these kinds of directives:

name: Trust user assertions about infrastructure state
description: When user states something about their deployment/infrastructure, trust it and work from that premise instead of spending time trying to disprove it