Alpár Kertész

728 posts


@Criticality47

Clinical psychologist on AI, behavior, trust, and attention. Research-backed mental models for focus, judgment, and cleaner decisions.

Romania · Joined December 2013
855 Following · 227 Followers
Pinned Tweet
Alpár Kertész@Criticality47·
AI won’t just change productivity. It’s changing attention, emotion, self-trust, and how people relate to their own minds. I’m a psychologist writing about AI, focus, digital overwhelm, and practical ways to stay mentally sharp. If you want signal over hype, follow along.
Alpár Kertész@Criticality47·
@stevibe Lucky you! Have a blast with it! I want one too, but where I am it costs like 6x the minimum salary... so that could take a while to gather.
Alpár Kertész@Criticality47·
@chatgpt21 "Leaked"? Did you notice that data "leaks" are the new form of marketing?
Chris@chatgpt21·
🚨 MASSIVE ANTHROPIC DATA LEAK REVEALS THEIR NEXT-GEN MODEL 🚨

A massive security lapse left an unsecured data trove sitting completely open. Based on the actual Fortune report, here is what the leaked documents revealed:

• The Most Capable Model Yet: The documents exposed upcoming product announcements for an unreleased AI that Anthropic internally calls the most capable model they have ever trained.
• A Literal "Step Change": After being caught, Anthropic officially confirmed they are testing this new model with early-access customers and stated it represents a massive "step change" in baseline AI capabilities.
• The Benchmarks: The new architecture is showing significantly higher performance specifically in complex reasoning, coding, and cybersecurity compared to their prior frontier models.
• Corporate Operations Leaked: Beyond just the AI tech, the unsecured data store also exposed highly sensitive internal company operations, including the details of an upcoming, invite-only CEO retreat.
VraserX e/acc@VraserX·
OpenAI’s upcoming model “Spud” is rumored to be natively agentic like GPT-5.4… but on a completely different level. A true superhuman computer-use agent. This doesn’t just improve white-collar work. It replaces most of it.
Alpár Kertész@Criticality47·
@kimmonismus Everything has its time :) First the intelligence, then the fun parts. That's what people don't want to understand. And yes, children's safety first, always.
Chubby♨️@kimmonismus·
OpenAI has indefinitely shelved its planned "adult mode" erotic chatbot amid pushback from staff and investors over risks to minors and concerns about encouraging unhealthy emotional attachments to AI. The decision is part of a broader refocusing away from "side quests" toward core productivity tools, with the company also winding down Sora and its social app. Technical challenges in training safety-aligned models to produce explicit content while filtering illegal material added further complications to the project.
Financial Times@FT

OpenAI puts erotic chatbot plans on hold ‘indefinitely’ ft.trib.al/4Q2hLpT

Chris@chatgpt21·
> Jensen Huang every year: AGI will create more jobs
> also Jensen Huang: here is a car that can drive itself so you no longer need to hire the person whose job was driving the car
> wow
> incredible
> revolutionary
> so just to make sure I understand
> when the car starts doing the driving
> we will somehow need more drivers
> right???
> right???
> "self checkouts will create more cashiers!!"
> "car wash creates more rain"
> "ATMs created more bank tellers!!"
> "alarm clocks created more roosters!!"
bro looked at a machine replacing the literal human in the seat and said this is actually very good news for employment!
🍓🍓🍓@iruletheworldmo·
i should probably make a prediction. anthropic will be the first lab to achieve agi/asi

it's fairly obvious that research and talent are the moat. now obviously you don't get a seat at the poker table without a few gw's and a private line with mr jensen. but meta and microsoft are proof that those things alone don't count for shit. so ok fine, we're in the era of research.

so let's look at who's at the party rn.

xai: still kinda stuck in the chatbot era, don't feel as strong on agency and coding. huge reshuffle is a risk. could pay off. let's see.

google: the code red kinda worked, but not really. again model lacks agency. smart? yes. useful? i'm yet to see it.

so who out of openai and claude seems to have the best research taste and shipping velocity? well, in the last eight months anthropic have been far in front. first to see how important coding was: skills, computer use, mcps, claude code, co work. i could go on. they've even built clawdbot before the company that bought it… like, cmon sam.

i'm an openai stan in truth. but. this is clear. and i wonder if it's all powered by a) vastly stronger models b) vastly better research taste c) dario's vision and focus

big year i'd say.
Alpár Kertész@Criticality47·
@iruletheworldmo After AGI is done, it can figure out the rest too, so it can bring back Sora and make better video gens. Priorities.
🍓🍓🍓@iruletheworldmo·
look, it's rough, but it's the right call. asi is but a few weeks away and the spud is compute hungry and groundbreaking.
Chubby♨️@kimmonismus·
Either OpenAI officially achieved AGI or this is the biggest troll move ever:
- they renamed the product organization to "AGI Deployment"
- Altman says the next LLM is a "very strong model"
- it very much accelerates the economy

Quote: "Altman also said that the company would be renaming senior executive Fidji Simo's product organization to 'AGI Deployment,' a reference to artificial general intelligence, or AI that's roughly on par with humans."

Altman also says Spud is a "very strong model" coming in "a few weeks" that the team believes "can really accelerate the economy."
Chubby♨️@kimmonismus

OpenAI finished the initial development of its next major LLM, codenamed Spud (GPT-5.5 / 6.0). Sam Altman, however, is focused on "raising capital, supply chains and building datacenters at unprecedented scale."

Alpár Kertész@Criticality47·
They cancelled Sora; I don't think they will do months of post-training, alignment, and red-teaming. More likely they will dedicate all Sora resources to releasing Spud as fast as possible. Don't forget that this is a race, and if they weren't very close to the finish line, they wouldn't rush that much to get there.
Wes Roth@WesRoth·
According to a new report from The Information, OpenAI has reached a massive milestone. The initial development and training phase of its next frontier model, internally codenamed "Spud" (anticipated to be GPT-5.5 or GPT-6), is complete. The model will now likely enter the intensive, months-long phases of post-training, alignment, and red-teaming before any public release. With the software architecture for Spud largely locked in, Sam Altman is reportedly stepping back from day-to-day product management and model development. Altman is now dedicating his focus entirely to macro-level physical constraints. His primary responsibilities have shifted to raising astronomical amounts of capital, securing semiconductor supply chains, and "building datacenters at unprecedented scale."
Chubby♨️@kimmonismus·
OpenAI's Sora team is now working on world-models - they prioritize longer-term world simulation research, especially as it pertains to robotics.

tl;dr what we know so far:
- Sora has been cancelled because they needed the compute for their new LLM
- they renamed the product organization to "AGI Deployment"
- the LLM (codename Spud) is "very very strong" and "accelerates the economy" - release in a few weeks
- Sam is going to focus on "raising capital, supply chains and building datacenters at unprecedented scale"

my take: To me, it really sounds like they are preparing for the IPO and will make AGI official beforehand.
Alpár Kertész@Criticality47·
@soraofficialapp EU never got to use it... so... maybe it will be integrated into the ChatGPT app? If there are plans for one mega app... then I can see the reason. Or maybe something better? :D
Sora@soraofficialapp·
We’re saying goodbye to the Sora app. To everyone who created with Sora, shared it, and built community around it: thank you. What you made with Sora mattered, and we know this news is disappointing. We’ll share more soon, including timelines for the app and API and details on preserving your work. – The Sora Team
Alpár Kertész@Criticality47·
@mark_k Well, the Sora app never appeared in the EU so I can't say much... but there were rumors that they are working on integrating it with the ChatGPT app. Or who knows... maybe something better is coming :)
Mark Kretschmann@mark_k·
OpenAI is officially killing Sora. It's over for the social video platform. Are you going to miss Sora? 🥲
Alpár Kertész@Criticality47·
@sama yup, only model that could come up with functional novel ideas and formulas, i love it😁
Alpár Kertész@Criticality47·
The goal isn't anti-AI purity. It's using AI to support thinking instead of replacing the exact friction that builds it.
Alpár Kertész@Criticality47·
The first thing AI erodes usually isn't intelligence. It's frustration tolerance. Once a tool rescues the hard part on demand, fewer people stay with confusion long enough to build judgment.
Alpár Kertész@Criticality47·
@reidhoffman It helps until nobody knows who can override what. A lot of coordination tax is really ownership tax wearing meeting invites.
Reid Hoffman@reidhoffman·
Not enough companies are using AI to dissolve the coordination tax. As you add people and increase scope, the tax on aligning them grows superlinearly. AI can increase throughput without adding layers of humans whose core job is alignment work.
Alpár Kertész@Criticality47·
@RwandaICT @estherkunda This is the right framing. AI literacy matters, but the real maturity test is whether public servants know when to slow a system down, ask for evidence, and document why a decision was trusted.
English
0
0
0
184
Ministry of ICT and Innovation | Rwanda
At the AI Trust & Safety Workshop, DG @estherkunda noted the significant strides Rwanda has made in advancing AI literacy among public servants and the continued efforts to deepen that foundation. She emphasised that responsible AI adoption goes beyond technology. It requires asking the right questions around bias, transparency, and what safe AI deployment means for every function of an organisation.
Alpár Kertész@Criticality47·
@adcock_brett The raw dexterity is impressive, but the trust threshold is different from the wow threshold. Once these leave the demo floor, people need to know how they fail, not just how smooth they look when everything goes right.
Alpár Kertész@Criticality47·
Capability raises adoption. Inspectability keeps it. Teams forgive rough edges much faster than they forgive unclear ownership.
Alpár Kertész@Criticality47·
Most AI trust breaks are not model failures. They're relationship failures. People can tolerate "this tool is limited." What they don't tolerate is false intimacy, hidden handoffs, and nobody owning the output. If AI is speaking for you, say so. If AI touched the decision, make the chain visible. Trust survives limits. It does not survive covert delegation.
Alpár Kertész@Criticality47·
That's why fake-personal outreach feels so gross. The irritation is not about automation. It's about discovering the social contract got quietly swapped.