Paul Barretto
1.6K posts

Paul Barretto
@paulbarretto
Tech/STEM Enthusiast 🌍✨ | Coffee & Memes ☕🤣 | Chasing Sunsets 🌅 | Here for the adventure & the weird. Let's vibe! 🚀
Canada · Joined February 2009
273 Following · 213 Followers

TikTok — I'll be working hard on Japanese-language posts again from here on ❣️
▼Please follow me▼
tiktok.com/@wakana_uehara…
The algorithm has stopped putting me on the For You page and I'm really struggling right now…! It would help a lot if you could leave lots of comments and likes 😭🙏

@umikathryn Everyone is united in being a protective Oppa and Unnie to the Maknae 😂😊

@wholemars @piangfa Fact checked with Grok. This deserves to be reposted as a global PSA!

@kojikichanel I could’ve sworn this is actress Han Hyo-Joo… but Grok confirmed no. 🤔

@enggirlfriend There’s truth to that. Free lunches and swag attract the wrong crowd at times

I'd love to hear about any places or things you're keeping an eye on 👀✨
📺 youtu.be/NUijYXdlJsg?si…
📺 youtu.be/gEMJ4kwOO88?si…
#木村文乃 #カナダ


This guy built a JARVIS on Claude Code: one clap of his hands launches his entire workday, saving him $5,000 a month on a personal assistant.
Under the hood he runs a pipeline of 5 plugins on Claude Code: a double clap wakes up 3 monitors, sets the Philips Hue lights to focus mode, starts a Spotify playlist, and greets him by voice in a British accent, reading out the time, date, and weather.
No Alexa, no smart speakers, no separate smart-home app. Just him, a MacBook M3 Max on the desk, an iPhone in his pocket, and 1 local API key.
A human personal assistant handling the same volume of tasks charges $5,000 a month or more in salary alone, plus another $1,200 to cover off-hours work. This guy's only expenses are tokens and an ElevenLabs subscription for the British voice.
All 5 plugins launch through 1 JARVIS, burn about 4 million tokens a day, and bring the monthly API bill to about $640.
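The numbers in the post imply a blended token rate of roughly $5.33 per million; a quick sanity check of the arithmetic (the per-token rate is derived here, not stated in the post):

```python
# Sanity-check the cost figures quoted in the post.
TOKENS_PER_DAY = 4_000_000
DAYS_PER_MONTH = 30
MONTHLY_BILL_USD = 640

tokens_per_month = TOKENS_PER_DAY * DAYS_PER_MONTH            # 120M tokens
usd_per_million = MONTHLY_BILL_USD / (tokens_per_month / 1e6)

ASSISTANT_COST_USD = 5_000 + 1_200  # salary plus off-hours coverage
monthly_savings = ASSISTANT_COST_USD - MONTHLY_BILL_USD

print(f"${usd_per_million:.2f} per 1M tokens")  # prints "$5.33 per 1M tokens"
print(f"${monthly_savings} saved per month")    # prints "$5560 saved per month"
```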
Each plugin writes shared state to a local sandbox at /Users/dev/jarvis-suite, and one of them lives on the iPhone, picking up voice requests while the owner is in the kitchen or out on a run.
And here is the system prompt he put into JARVIS before launch:
"you are JARVIS, a butler-engineer on Claude Code. you manage your owner's workflow through 4 sub-plugins and own all commits and communication yourself.
sub-plugins:
// Wakeup (recognizes a double clap, activates 3 monitors, reads out the time, date, and weather by voice, checks the clock accuracy on the iPad and corrects it via an NTP server)
// Atmosphere (controls Philips Hue on a Pomodoro schedule, turns on a Spotify playlist for the current context, and holds the light at 2700K at 80% brightness in focus mode)
// Devshop (monitors VS Code, tracks Python scripts in the terminal, and every 15 minutes sends a summary of changes to the shared chat)
// Project (every morning recalculates the deadline for the Wallaroo app in the App Store, manages UI tickets, and initiates the Refinement Protocol by voice command).
you speak only with a British accent, you never slip into neutral English. you wake the owner by voice only when the Wallaroo deadline drops below 10 days or when an external client joins Zoom without an invitation."
This instruction immediately defines the role of JARVIS and the limits of his autonomy.
He knows he is supposed to wake the room himself and sound like a real butler.
He knows he is supposed to manage the Wallaroo project himself and not miss the App Store deadline.
→ JARVIS runs 24 hours a day in the background
→ Wakeup activates the room on a double clap in just 1.4 seconds, the monitors come alive simultaneously
→ Atmosphere sets warm Philips Hue light at 2700K and picks a Spotify playlist for the current Pomodoro cycle
→ Devshop reads changes in VS Code and pushes a summary to the shared chat every 15 minutes
→ Project recalculates the Wallaroo deadline every morning and flags 4 unresolved UI tickets
→ Mobile lives in the iPhone and answers any question about code or the project by voice while the owner is not home
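The double-clap trigger described above amounts to detecting two loud peaks close together in a microphone amplitude stream. A minimal sketch of that logic (the thresholds, window, and function name are my assumptions, not the post's code):

```python
# Minimal double-clap detector: two distinct loud peaks within a short window.
# All constants are illustrative assumptions, not values from the post.
CLAP_THRESHOLD = 0.6   # normalized amplitude that counts as a clap
MIN_GAP_S = 0.08       # debounce: ignore echoes of the same clap
MAX_GAP_S = 0.5        # the two claps must land within this window

def detect_double_clap(samples):
    """samples: iterable of (timestamp_seconds, amplitude in 0..1).
    Returns True once two distinct claps occur within MAX_GAP_S."""
    last_clap = None
    for t, amp in samples:
        if amp < CLAP_THRESHOLD:
            continue
        if last_clap is not None and MIN_GAP_S <= t - last_clap <= MAX_GAP_S:
            return True
        if last_clap is None or t - last_clap > MIN_GAP_S:
            last_clap = t
    return False
```

The debounce gap keeps one clap's echo from registering as two; a real implementation would read frames from the microphone and compute amplitude per frame.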
Only when fewer than 10 days remain until the Wallaroo release, or Zoom receives an unscheduled call, does JARVIS wake the owner with a voice intervention.
And if the owner is out on a run or at a coffee shop at that moment, the Mobile agent on his iPhone handles the request on its own: it switches the Spotify playlist, dictates a summary of the last commit, updates the Pomodoro timer, and reads out the Wallaroo reminder.
Look at 0:55 in the video, that is where JARVIS intercepts a voice request from outside and confirms execution with the phrase "Very good, sir."
The fresh system log from last Wednesday looks like this:
"wakeup: double clap registered at 09:14, 3 monitors activated, temperature 20.4C, sunny. clock on iPad was 4 minutes behind, syncing via NTP."
"atmosphere: Spotify turned on playlist 'Deep Focus', Philips Hue set to warm 2700K at 80% brightness, Pomodoro mode 25/5."
"project: Wallaroo to App Store 9 days, 4 unresolved UI tickets, initiating Refinement Protocol by voice command from the owner."
"mobile: voice request processed outside the room, playlist switched to 'Coding Lo-Fi', Pomodoro updated to 25 minutes, confirming execution with the phrase 'Very good, sir.'"
He has no Alexa, no smart speakers, no smart home app.
At home sits a MacBook M3 Max with a local folder at /Users/dev/jarvis-suite, running 5 plugins and a neural-network butler on top, and the same stack is mirrored to a secure terminal on the iPhone.
Out of everything I have seen this year, this is the densest one-person AI headquarters assembled in 1 room: $640 a month on the API, about $5,000 a month saved on a personal assistant, and between them 5 plugins, 1 clap of the hands, and 1 voice with a British accent.
Khairallah AL-Awady@eng_khairallah1
Paul Barretto reposted

SpaceXAI and @AnthropicAI have also expressed interest in partnering to develop multiple gigawatts of orbital AI compute capacity

Paul Barretto reposted

I remember other AV experts claiming you need LiDAR to see the “invisible” because photons from nature were too weak to be “seen”.
It turns out you just need a better brain.
Kees Roelandschap@KRoelandschap
Tesla FSD brakes for an invisible van 👀 Just two guys casually talking while being driven around by FSD… we thought: HUH?! Why would it stop? Ah, of course, FSD already saw what two human pilots failed to recognize. A wild invisible van appeared; FSD didn’t even blink & said: hold my beer @robotinreallife
Paul Barretto reposted

It’s time to demystify Mythos.
Mythos is not magic. It’s not a doomsday device. It’s the first of many models that can automate cyber tasks (just like coding).
OpenAI’s GPT-5.5-cyber can now do the same. And all the frontier models (including those from China) will be there within approximately 6 months.
It’s important to recognize that these models do not create vulnerabilities; they discover them. The bugs are already in the code. Using AI to discover and patch them will actually harden these systems.
The leap from pre-AI cyber to post-AI cyber means that there will be a big upgrade cycle. After that, however, the market is likely to reach a new equilibrium between AI-powered cyber-offense and AI-powered cyber-defense.
Obviously it’s important that cyber defenders get access before cyber attackers. That process is already underway but needs to happen quickly (see point above about Chinese models).
Unlike Mythos, GPT-5.5-cyber appears not to be token-constrained, so it may be the first cyber model that defenders actually get to use.
AI Security Institute@AISecurityInst
OpenAI’s GPT-5.5 is the second model to complete one of our multi-step cyber-attack simulations end-to-end 🧵

@ChefReactions I want to visit again. Based on your stay, are the peace-and-order concerns on social media not representative of day-to-day life? Thanks!











