
Balu0X
@Balu0X
Building things. Breaking things. AI, code, random thoughts.
Joined February 2026
169 Following · 111 Followers
Pinned post

@TheAhmadOsman Absolutely. If we use it effectively, at the least I would take a break from X. There would be new inventions, new start-ups that could raise billions; all of us would be living in a parallel world.

OpenAI just launched the OpenAI Deployment Company, a new majority-owned subsidiary focused entirely on helping businesses actually deploy AI at scale.
- 150+ Forward Deployed Engineers
- $4 billion initial war chest from 19 big partners
This is them going from "here's the model" to "we'll come sit with you and make it work in your company."
How it’s different:
While others sell APIs and hope you figure it out, OpenAI is now embedding their own engineers + specialists inside customer organizations, like a McKinsey + AI powerhouse hybrid.
This could genuinely revolutionize enterprise AI adoption. Most companies fail at deployment, not because of the model, but because of integration, change management, and legacy systems.
Curious to see if Anthropic, Google, or xAI follow with something similar.
What do you think: game changer, or just more fancy consulting?
Greg Brockman@gdb
Introducing the OpenAI Deployment Company, which will help businesses maximally succeed with their deployments of AI. Starting with 150 Forward Deployed Engineers and Deployment Specialists, and $4 billion of initial investment from 19 partners.

OpenAI@OpenAI
Today we’re launching the OpenAI Deployment Company to help businesses build and deploy AI. It's majority-owned and controlled by OpenAI. It brings together 19 leading investment firms, consultancies, and system integrators to help organizations deploy frontier AI to production for business impact. openai.com/index/openai-l…

This works really well, btw: at the end of your query, ask your LLM to "structure your response as HTML", then view the generated file in your browser. I've also had some success asking the LLM to present its output as slideshows, etc.
More generally, imo audio is the human-preferred input to AIs, but vision (images/animations/video) is the preferred output from them. Around a ~third of our brains is a massively parallel processor dedicated to vision; it is the 10-lane superhighway of information into the brain. As AI improves, I think we'll see a progression that takes advantage of this:
1) raw text (hard/effortful to read)
2) markdown (bold, italic, headings, tables, a bit easier on the eyes) <-- current default
3) HTML (still procedural with underlying code, but a lot more flexibility on the graphics, layout, even interactivity) <-- early but forming new good default
...4,5,6,...
n) interactive neural videos/simulations
Imo the extrapolation (though the technology doesn't exist just yet) ends in some kind of interactive videos generated directly by a diffusion neural net. Many open questions as to how exact/procedural "Software 1.0" artifacts (e.g. interactive simulations) may be woven together with neural artifacts (diffusion grids), but generally something in the direction of the recently viral x.com/zan2434/status…
There are also improvements necessary and pending at the input. Neither audio nor text nor video alone is enough; e.g., I feel a need to point/gesture at things on the screen, similar to all the things you would do with a person physically next to you and your computer screen.
TLDR: The input/output mind-meld between humans and AIs is ongoing, and there is a lot of work to do and significant progress to be made well before jumping all the way to Neuralink-esque BCIs and all that. For what it's worth, at the current stage, hot tip: try asking for HTML.
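The "ask for HTML" tip above can be sketched as a small Python helper. The function names, the fence-stripping heuristic, and the placeholder client call are my own assumptions for illustration, not from the original post; the actual LLM call is left as a commented stub.

```python
import re
import webbrowser
from pathlib import Path

# Markdown code-fence marker, built up programmatically so this snippet
# doesn't itself contain a literal fence.
FENCE = "`" * 3

def extract_html(llm_response: str) -> str:
    """Pull the HTML out of an LLM reply, stripping an optional
    markdown code fence if the model wrapped its answer in one."""
    pattern = FENCE + r"(?:html)?\s*\n(.*?)" + FENCE
    match = re.search(pattern, llm_response, re.DOTALL)
    return match.group(1).strip() if match else llm_response.strip()

def view_as_html(llm_response: str, path: str = "response.html") -> str:
    """Write the model's HTML answer to disk and open it in the default browser."""
    html = extract_html(llm_response)
    out = Path(path)
    out.write_text(html, encoding="utf-8")
    webbrowser.open(out.resolve().as_uri())
    return html

# Hypothetical usage with any chat-completion client (client, model, and
# query are placeholders; swap in your provider's SDK):
# reply = client.chat.completions.create(
#     model="...",
#     messages=[{"role": "user",
#                "content": query + " Structure your response as HTML."}],
# )
# view_as_html(reply.choices[0].message.content)
```

The fence-stripping step matters in practice because models often wrap HTML answers in a markdown code block even when asked for raw HTML.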
Thariq@trq212