Visiting Fellow, Ph.D.
@jackiefloyd

28.4K posts

AI all the time and tacos.

Houston, TX · Joined June 2008
3.3K Following · 1.3K Followers
Brad Groux@BradGroux·
If you care about the future of OpenClaw, be sure to follow Dave - he's on the board with @steipete and they are working together to protect the future of the project. Check out his conversation about the Foundation with @jason and @TWiStartups - youtube.com/watch?v=KEHMoI…
Dave Morin 🦞@davemorin

🦞 Been working with Peter Steinberger (@steipete) on the OpenClaw Foundation structure for weeks. A home for thinkers and hackers and those who want to own their data. Honored to serve as the founding independent board member. This community built something extraordinary; our job is to protect it. Open source forever. Excited to share more soon.

Visiting Fellow, Ph.D. retweeted
Visiting Fellow, Ph.D.@jackiefloyd·
@mcuban I'm building my own personal AI stack. The traditional companies are too slow. I'm accelerating!!!
Mark Cuban@mcuban·
I’m going to tell you how much worse it was at the start of the PC revolution for white collar workers trying to adapt, vs today with AI.

Today, presumably every white collar worker has access to a smart phone and/or a PC/laptop. Back then, a PC cost $4,995; an off-brand was $3,995. $5K in 1984 is about $16K today. It was really expensive.

The only reason I could learn how to code and support software is because my job let me take home a PC to learn. By reading the software manual. Literally. RTFM. Or pay to go to training. Classes that started at hundreds of dollars then. It was expensive. It absolutely limited who could get ahead.

Today, ANYONE can go to their browser, to the AI LLM website of their choice, and type in the words “I’m a novice with zero computer background, teach me how to create an agent that reads my email and …”

That concept applies to LEARNING ANYTHING. Think about what this means. Any employee of any company can say “I need to learn how to xyz for my job, which is to do the following: Tell me what more information you need to help me be more efficient, productive and promotable.” Or “What new skills can you teach me that will help me reduce my chances of getting laid off?” Or “What suggestions do you have for me to communicate to my boss, who I barely know, to help my chances of staying employed?”

These aren’t great prompts. But they are a start that anyone can take. Think about how incredible that is.

Back in the day it was so much harder for white collar workers. It was harder for new grads because unless they took comp sci, they probably had never used a PC.

Big companies are going to cut jobs. No question about it. Small companies are going to need more and more AI-literate thinkers who can help them compete or get an edge.

What I tell every entrepreneur, and it’s more crucial today: “When you run with the elephants there are the quick and the dead. Adopt tech quickly, and you can outmaneuver big companies.”
Mark Cuban@mcuban

An article from the 90s explaining how in the 1980s, personal computers changed the dynamic of college vs high school workers. College grads learned how to use PCs and grew wages faster.

Mind you, this was when interest rates were 15pct, white collar unemployment was the highest it’s been in any non-covid year, general unemployment was 10pct, there was a recession, 18pct mortgages, and the start of the savings and loan industry collapse. The economy was a mess. Except it was the start of the “digital revolution,” which led to change.

Here we are at the early days of the AI revolution. I think it will be very analogous to what happened back then.

If you think learning how to use Claude seems daunting, imagine being 50 yrs old in 1983, not knowing how to type, using a 10-key adding machine with a tape roll to do all your work as an analyst, and realizing you had to figure out how your brand new IBM PC and Lotus 1-2-3 worked. Or having only used a typewriter your entire career, then having to learn the new PC and WordStar.

Trust me. WordStar key combinations were far harder to learn than telling Claude what you want done. Lots of people couldn’t figure it out. Those who did were more productive.

Ctrl-QA with AI nber.org/digest/sep97/h…

Wes Roth@WesRoth·
Apple has quietly halted App Store updates for popular AI "vibe-coding" applications, most notably the $9 billion startup Replit and mobile app builder Vibecode. After months of pushback, Apple is reportedly demanding major UX changes. Replit is being asked to force its generated app previews to open in an external web browser rather than natively inside its app. Vibecode was told it must completely remove the ability to generate software specifically for Apple devices.
Visiting Fellow, Ph.D. retweeted
Alex Banks@thealexbanks·
interactive visuals are wildly underrated in Claude. you can literally learn anything now. Ask Claude: “Show me Newton’s three laws of motion with interactive demonstrations.”
The Signal@thesignalAI_

Claude quietly shipped interactive visuals and most people haven't noticed. Charts, diagrams, and mini tools built right inside the chat. Free for everyone. Full breakdown with prompt templates in this week's issue: thesignal.substack.com/p/claudes-new-…

Visiting Fellow, Ph.D. retweeted
OpenArt@openart_ai·
Today, we’re launching a new way to create with AI. With OpenArt Worlds, you can generate a fully navigable 3D environment from a single prompt or image, step inside it, and capture shots exactly the way you envision them. No more starting over. No more inconsistent scenes. You build the world once - and create inside it.
• Move through your scene freely
• Find your angles
• Add characters and elements
• Capture production-ready shots
Visiting Fellow, Ph.D. retweeted
Aritra 🤗@ariG23498·
When you run a @PyTorch model on a GPU, the actual work is executed through kernels. These are low-level, hardware-specific functions designed for GPUs (or other accelerators).

If you profile a model, you'll see a sequence of kernel launches. Between these launches, the GPU can sit idle, waiting for the next operation. A key optimization goal is therefore to minimize gaps between kernel executions and keep the GPU fully utilized.

One common approach is `torch.compile`, which fuses multiple operations into fewer kernels, reducing overhead and improving utilization. Another approach is to write custom kernels tailored to specific workloads (e.g., optimized attention or fused ops). However, this comes with significant challenges:
> requires deep expertise in kernel writing
> installation hell
> integration with the model is non-trivial

To address this, @huggingface introduces the `kernels` library. With it one can:
> build custom kernels (with the help of a template)
> upload them to the Hub (like models or datasets)
> integrate them into models with ease

Let's take a look at how the transformers team uses the kernels library to integrate it into already existing models. (more in the thread)
Visiting Fellow, Ph.D. retweeted
Romain Huet@romainhuet·
We just brought our community work into one place, making it easier to find events and programs like Codex Ambassadors and Codex for OSS. There are 15+ community-led events coming up around the world, with more to come! Where would you like to see a Codex meetup next? 🌏
Visiting Fellow, Ph.D. retweeted
Haider.@slow_developer·
openai dropped gpt-5.4 mini and nano. i think this is a great move, as gpt-5.4 is by far my favourite 5.x model so far. moreover, gpt-5.4-mini is optimized for coding, computer use, multimodal, and sub-agents. more than twice as fast as gpt-5 mini. incredible
OpenAI@OpenAI

GPT-5.4 mini is available today in ChatGPT, Codex, and the API. Optimized for coding, computer use, multimodal understanding, and subagents. And it’s 2x faster than GPT-5 mini. openai.com/index/introduc…

Visiting Fellow, Ph.D. retweeted
SkalskiP@skalskip92·
spent most of my day playing with GLM-OCR. it's a 0.9B param vision-language model. supports 8K resolution, 8+ languages, and has built-in text, LaTeX, and table recognition modes. awesome! I tested it across different OCR tasks, starting with shipping container serial numbers.
Visiting Fellow, Ph.D. retweeted
Morph@morphllm·
Introducing FlashCompact - the first specialized model for context compaction.
33k tokens/sec
200k → 50k in ~1.5s
Fast, high-quality compaction
Visiting Fellow, Ph.D. retweeted
Aaron Levie@levie·
The official Box CLI is here. Now you can use Box via Claude Code, Codex, Perplexity Computer, OpenClaw & more as a full cloud file system for agents. Available to all users, including free users with 10GB of free storage. npm install --global @box/cli
Visiting Fellow, Ph.D. retweeted
Daniel San@dani_avila7·
First tests of the Cowork Dispatch feature. Here I'm starting a session on my computer, asking it to check the latest PRs in my GitHub repo.