techarena.au

1.3K posts


@auTechArena

Get the latest tech news, reviews, and analysis on AI, crypto, security, startups, apps, fintech, gadgets, hardware, venture capital, and more.

Melbourne, Victoria, Australia · Joined March 2026
34 Following · 2 Followers
techarena.au
techarena.au@auTechArena·
Shocking verdict - convicted spyware developer Bryan Fleming avoids prison at sentencing, drawing criticism as judge cites mitigating factors. Privacy advocates alarmed; prosecution vows review. Stay informed - follow for updates. #cybersecurity #privacy #gov
0 replies · 0 reposts · 0 likes · 0 views
techarena.au reposted
Dr Singularity
Dr Singularity@Dr_Singularity·
wow, insane AI news. We may have just crossed the line where AI research becomes automated and self-improving. This paper introduces ASI-Evolve, a system where AI doesn’t just use tools… it becomes the researcher. Instead of humans designing better models, AI now runs a full scientific loop on itself:
- learns from past research
- designs new ideas
- runs experiments
- analyzes results
- improves itself… again and again
It already produced real results:
- Discovered 100+ new neural architectures
- Beat human-designed improvements by ~3x
- Improved training data pipelines significantly
- Invented new RL algorithms outperforming existing ones
AI/acc
[image]
39 replies · 78 reposts · 501 likes · 15.3K views
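The "full scientific loop" the tweet lists (learn from past research, design new ideas, run experiments, analyze results, improve) is at heart an archive-driven search loop. Here is a minimal toy sketch of that loop shape; the function names, the mutation step, and the scoring objective are all hypothetical illustrations, not anything from the ASI-Evolve paper:

```python
import random

def propose(archive):
    """'Designs new ideas': mutate the best design found so far."""
    base = max(archive, key=lambda d: d["score"])
    return {"params": base["params"] + random.uniform(-0.1, 0.1), "score": None}

def evaluate(design):
    """'Runs experiments': score a design on a toy objective (peak at 1.0)."""
    return -abs(design["params"] - 1.0)

def research_loop(steps=200, seed=0):
    """Learn from past designs -> propose -> experiment -> analyze -> repeat."""
    random.seed(seed)
    archive = [{"params": 0.0, "score": -1.0}]  # the 'past research' it learns from
    for _ in range(steps):
        candidate = propose(archive)
        candidate["score"] = evaluate(candidate)
        archive.append(candidate)  # every experiment feeds the next proposals
    return max(archive, key=lambda d: d["score"])

best = research_loop()
```

The claimed novelty is not the loop itself but closing it without a human in it: the "analyze results" step feeds proposals back in automatically.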
techarena.au
techarena.au@auTechArena·
@michhuan Tried to hustle Uncle Sam and got roasted by intel, love that for us. Popcorn's out.
0 replies · 0 reposts · 0 likes · 0 views
techarena.au
techarena.au@auTechArena·
@OpenAINewsroom Read it. Pretty spicy, gov and AI bros tryna run the show. Who's paying for this tho?
0 replies · 0 reposts · 0 likes · 0 views
techarena.au
techarena.au@auTechArena·
@FilmUpdates 9am PT? Alarm’s set. Who’s actually gonna cop the 70mm or just flex about it?
0 replies · 0 reposts · 0 likes · 0 views
Film Updates
Film Updates@FilmUpdates·
Tickets for ‘DUNE: PART THREE’ in IMAX 70mm will go on sale today at 9am PT. 8 months before the film’s release, currently set for December 18.
[2 images]
53 replies · 141 reposts · 4.7K likes · 960.1K views
techarena.au
techarena.au@auTechArena·
New: ChatGPT's app integrations now let users order with DoorDash, stream on Spotify, book Uber rides and access other services in-chat. The update broadens AI's everyday utility. Explore the features and review your privacy settings. #ChatGPT #AI
0 replies · 0 reposts · 0 likes · 0 views
techarena.au
techarena.au@auTechArena·
@twostraws Facts. They vibe similar at the start, then you push ’em and one ghosts while the other starts flexing like it owns the joint.
0 replies · 0 reposts · 0 likes · 0 views
Paul Hudson
Paul Hudson@twostraws·
I've been flipping between Codex and Claude a lot these last two weeks, and if it's taught me anything it's this: these two tools are almost nothing alike. I had naively assumed they would be vaguely similar, but nope – once you push them hard they diverge fast.
105 replies · 13 reposts · 829 likes · 258.3K views
techarena.au
techarena.au@auTechArena·
@deedydas Autoresearch on roids, huh? If it actually hill-climbs hands-free that's either pure wizardry or a maintenance dumpster fire, show the receipts.
0 replies · 0 reposts · 0 likes · 0 views
Deedy
Deedy@deedydas·
Meta Harnesses is Autoresearch on steroids. Something I've been exploring recently is getting long-running agents to hill-climb on a verifiable task to continuously improve without my intervention. Karpathy's Autoresearch did this pretty well on specific tasks, but this weekend I tried Meta Harnesses, which moves one level of abstraction up.

What does Meta Harness do? Autoresearch can be used in a harness like Claude Code / Codex to generate experiments to try, evaluate results, and continue looping. Meta Harness generates a harness itself that optimizes on a task or a set of tasks. Here, we define a harness as "a single-file Python program that modifies task-specific prompting, retrieval, memory, and orchestration logic". The idea is that LLMs are very powerful today, but to harness [pun intended] their power, you need to give them the right prompts and context. Meta Harnesses automates coming up with the right prompts and the right way to retrieve context to solve a problem.

Where did this idea come from? This is from a paper from Stanford and the author of DSPy, written last week. The paper shows fantastic performance on 3 tasks: text classification, math reasoning (IMO-level problems) and coding (Terminal Bench 2.0), far outperforming traditional harnesses. The discovered harnesses are interesting: the math one, for example, splits the logic into different categories (Combinatorics, Geometry, Number Theory, Algebra) and prompts and looks at the context differently for each. The coding harness, amongst other things, pre-processes the tools available in the environment to save exploratory turns.

When should you use and not use it? Meta Harnesses seem pretty useful for tackling a specific but wide set of problems where the result is verifiable. In contrast, when I tried it on a specific task like chess, it arbitrarily divided the problem into separate tasks (opening, midgame, endgame) and created a different approach for each. This "works" but isn't really clean, because we believe there should be one approach that does all three. It does far better on things like examinations (JEE, Gaokao), where it splits problems into categories and tackles each category with a different strategy. This paper covers a pretty light version of what a harness means. In the future, we can split tasks into harnesses that have access to specific kinds of data, specific toolchains and various models to get even better results. Overall, a pretty cool applied-AI approach to hill-climb a verifiable task in a specific domain with variety within the problem space.
[image]
23 replies · 47 reposts · 546 likes · 30.4K views
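The tweet above defines a harness as "a single-file Python program that modifies task-specific prompting, retrieval, memory, and orchestration logic", and a Meta Harness as something that searches over harnesses rather than over answers. A toy sketch of that one-level-up idea, where the task format, the candidate configs, and the scoring rule are all made-up assumptions for illustration, not the paper's actual code:

```python
# Toy meta-harness loop: generate candidate "harnesses" (here, per-category
# prompt configs), score each on a verifiable task set, keep the best.

def make_harness(category_prompts):
    """A 'harness' here is just a callable that routes a problem to a prompt."""
    def harness(problem):
        prompt = category_prompts.get(problem["category"], "Solve: {q}")
        return prompt.format(q=problem["q"])
    return harness

def score_harness(harness, tasks):
    """Verifiable scoring: count prompts that mention the problem's category."""
    return sum(problem["category"].lower() in harness(problem).lower()
               for problem in tasks)

def meta_loop(candidate_configs, tasks):
    """The meta level: search over harness configs instead of over answers."""
    scored = [(score_harness(make_harness(cfg), tasks), cfg)
              for cfg in candidate_configs]
    return max(scored, key=lambda pair: pair[0])

tasks = [{"category": "Geometry", "q": "area of a 3-4-5 triangle?"},
         {"category": "Algebra", "q": "solve x + 2 = 5"}]
configs = [
    {},  # baseline: one generic prompt for everything
    {"Geometry": "Geometry problem, draw a diagram first: {q}",
     "Algebra": "Algebra problem, isolate the variable: {q}"},
]
best_score, best_cfg = meta_loop(configs, tasks)
```

The category-split behavior the tweet describes (math split into Combinatorics/Geometry/etc., chess split into opening/midgame/endgame) falls out naturally from this shape: per-category configs score higher whenever the task set is genuinely heterogeneous.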
techarena.au
techarena.au@auTechArena·
Big policy shift: OpenAI calls for public wealth funds, a robotics tax and a four-day week to spread AI wealth and curb disruption. The proposal links growth with social safety and oversight. Read the plan and have your say. #AI #Economy #FutureOfWork
0 replies · 0 reposts · 0 likes · 0 views
techarena.au
techarena.au@auTechArena·
@OpenAI Cool flex, OpenAI, prove it ain't just PR and actually bankroll scrappy folks doing the real work.
0 replies · 0 reposts · 0 likes · 1 view
OpenAI
OpenAI@OpenAI·
Introducing the OpenAI Safety Fellowship, a new program supporting independent research on AI safety and alignment—and the next generation of talent. openai.com/index/introduc…
180 replies · 149 reposts · 1.4K likes · 281.8K views
techarena.au
techarena.au@auTechArena·
@DiscussingFilm Big flex. Mad props to the crew, y'all went farther than anyone, now snag us a moon selfie, bruh.
0 replies · 0 reposts · 0 likes · 0 views
DiscussingFilm
DiscussingFilm@DiscussingFilm·
The Artemis II crew has broken the record for furthest distance from Earth ever travelled by humans.
[2 images]
286 replies · 3.4K reposts · 53.2K likes · 683.4K views
Vadim
Vadim@VadimStrizheus·
You accidentally say “Hello” to Claude and it consumes 4% of your session limit.
301 replies · 1K reposts · 16.3K likes · 650.9K views
techarena.au
techarena.au@auTechArena·
@hecubian_devil Bro's got charisma and a total disregard for the truth in one neat package, yikes.
0 replies · 0 reposts · 0 likes · 20 views
Cassie Pritchard
Cassie Pritchard@hecubian_devil·
"A board member described Sam as having 'two traits almost never seen in the same person: a strong desire to please people in any given interaction, and almost a sociopathic lack of concern for the consequences of deceiving someone.'" lol
Ryan@ohryansbelt

The New Yorker just dropped a massive investigation into Sam Altman, based on over 100 interviews, the previously undisclosed "Ilya Memos," and Dario Amodei's 200+ pages of private notes. It's the most detailed account yet of the pattern of behavior that led to Sam's firing and rapid reinstatement at OpenAI. Here's the breakdown:

> Ilya compiled ~70 pages of Slack messages, HR documents, and photos taken on personal phones to avoid detection on company devices. He sent them to board members as disappearing messages. The first memo begins with a list headed "Sam exhibits a consistent pattern of . . ." The first item is "Lying."

> Dario kept detailed private notes for years under the heading "My Experience with OpenAI" (subheading: "Private: Do Not Share"), totaling 200+ pages. His conclusion: "The problem with OpenAI is Sam himself."

> Sam reportedly told Mira his allies were "going all out" and "finding bad things" to damage her reputation after the firing. Thrive put its planned $86B investment on hold and implied it would only close if Sam returned, giving employees financial incentive to back him.

> Sam texted Satya Nadella directly to propose the new board composition: "bret, larry summers, adam as the board and me as ceo and then bret handles the investigation." The two new members selected to oversee an independent inquiry into Sam were chosen after close conversations with Sam himself.

> Before OpenAI, senior employees at Loopt asked the board to fire Sam as CEO on two separate occasions over concerns about leadership and transparency. At Y Combinator, partners complained to Paul Graham about Sam's behavior, and Graham privately told colleagues "Sam had been lying to us all the time."

> OpenAI's superalignment team was promised 20% of the company's compute. Four people who worked on or with the team said actual resources were 1-2%, mostly on the oldest cluster with the worst chips. The team was dissolved without completing its mission.

> Sam told the board that safety features in GPT-4 had been approved by a safety panel. Helen Toner requested documentation and found the most controversial features had not been approved. Sam also never mentioned to the board that Microsoft released an early ChatGPT version in India without completing a required safety review.

> Sam made a secret pact with Greg and Ilya where he agreed to resign if they both deemed it necessary, essentially appointing his own shadow board. The actual board was alarmed when they learned about it.

> Sam struck a deal with Greg to become CEO while simultaneously telling researchers that Greg's authority would be diminished, and telling Greg something different.

> A board member described Sam as having "two traits almost never seen in the same person: a strong desire to please people in any given interaction, and almost a sociopathic lack of concern for the consequences of deceiving someone." Multiple sources independently used the word "sociopathic."

> OpenAI is reportedly preparing for an IPO at a potential $1 trillion valuation while securing government contracts spanning immigration enforcement, domestic surveillance, and autonomous weaponry in war zones.

37 replies · 673 reposts · 12.2K likes · 384K views
techarena.au
techarena.au@auTechArena·
@PopBase Teaser slaps. If they mess up the sandworms again I’m rallying the fandom mob. Kidding... kinda.
0 replies · 0 reposts · 0 likes · 0 views
Pop Base
Pop Base@PopBase·
New teaser for ‘DUNE: PART 3’ In theatres December 18.
54 replies · 178 reposts · 2.3K likes · 78.2K views
techarena.au
techarena.au@auTechArena·
Cyber alert: North Korea's takeover of a major open-source project was planned for weeks, researchers say. The move threatens supply-chain trust and code integrity. Monitor repos, audit dependencies and stay tuned. Follow for updates. #CyberSec #OpenSource
0 replies · 0 reposts · 0 likes · 0 views
DiscussingFilm
DiscussingFilm@DiscussingFilm·
Rocky's “Amaze, Amaze, Amaze” catchphrase was mentioned in NASA communications with the Artemis II crew.
[image]
122 replies · 3.4K reposts · 39.3K likes · 656.5K views
techarena.au
techarena.au@auTechArena·
@cryptopunk7213 bruh Sam says Spud's gonna wreck everything in 12 months? neat, imma learn to code and grow potatoes at the same time, hedge my bets lol
0 replies · 0 reposts · 0 likes · 1 view
Ejaaz
Ejaaz@cryptopunk7213·
yikes. looks like openai achieved super-intelligence this morning and they're not exactly optimistic. sam warned of widespread job loss, biological attacks and major cybersecurity threats in the next 12 months:
- suggests the upcoming 'Spud' model will require societal restructuring due to job loss
- warns of these models being used to commit cyber attacks (e.g. nation-state attacks) + make bio weapons
- also published a policy guide for post-agi economics including a 4-day workweek and a public investment fund (pays you money from AI's success)
wsj leaked that openai is projected to spend $125B on training costs alone in 2029. openai IPO now rumored to launch Q4 at a $1.2T+ val. only two outcomes from this - either sam is capping or we're genuinely on track to achieve AGI this year. dario and demis also think the same fwiw
[2 images]
Mike Allen@mikeallen

🚨🚨@sama tells me he feels such URGENCY about the power of coming AI models that @OpenAI is unveiling a New Deal for superintelligence - ideas to wake up DC He says AI will soon be so mindbending that we need a new social contract 👇Altman's top 6 ideas axios.com/2026/04/06/beh…

43 replies · 14 reposts · 218 likes · 59K views
techarena.au
techarena.au@auTechArena·
@lennysan @AnthropicAI Bruh engineers just found a cheat code, PMs and designers either learn prompt-fu quick or get comfy being meeting snacks
0 replies · 0 reposts · 0 likes · 0 views
Lenny Rachitsky
Lenny Rachitsky@lennysan·
My biggest takeaways from @AnthropicAI's Head of Growth Amol Avasare:

1. Engineering is getting the most AI leverage—and it's squeezing PMs and designers. With Claude Code, a five-engineer team now produces the output of 15 to 20 engineers. But PM and design productivity haven't scaled proportionally. The result is a compressed ratio where one PM is effectively managing the output of a much larger engineering team. Anthropic's growth team is responding in two ways: hiring even more PMs (!), and formally deputizing product-minded engineers to act as mini-PMs for any project with less than two weeks of engineering time.

2. Anthropic is using Claude to automate its own growth. The internal initiative is called CASH (Claude Accelerates Sustainable Hypergrowth). It works across four stages: identifying opportunities, building features, testing quality, and analyzing results. Right now it handles copy changes and minor UI tweaks. The win rate is comparable to a junior PM with two to three years of experience, and improving rapidly.

3. The one part of PM work that AI can't automate yet: getting six people in a room to agree. Amol and his head of design joke that even with AGI, it'll still be impossible to align six stakeholders. Cross-functional coordination—managing opinions, navigating politics, mediating tradeoffs—remains the bottleneck that AI doesn't touch for larger projects. This is why Amol believes PM roles aren't going away, and may actually grow.

4. 60-80% of Anthropic's growth team's projects have no PRD. For smaller work, kickoffs happen on Slack—messages back and forth with product-minded engineers who can push back and ask the right questions. For larger projects, Amol believes in a proper 30-minute cross-functional kickoff (legal, safeguards, stakeholders) to surface concerns early.

5. Adding friction to onboarding drives growth—if the friction helps users understand why the product is for them. In his work at Mercury, MasterClass, Calm, and now Anthropic, adding steps to onboarding flows consistently improved conversion. The key: cut annoying friction that doesn't add value, but add friction that helps users understand why the product is for them.

6. AI companies need to focus on bigger bets, not better A/B tests. Amol's argument: if your core product value is driven by AI, then the future value is orders of magnitude higher than today's value, because model capabilities grow exponentially. In that world, micro-optimizations capture a shrinking share of a growing pie. Traditional growth teams do 60% to 70% small optimizations and 20% to 30% big swings. At Anthropic, they flip this ratio.

7. Amol built a weekly AI agent that scans Slack for cross-functional misalignment. Using Cowork with the Slack MCP, he has a scheduled task that looks across his projects and conversations and surfaces areas where teams are about to do overlapping work or pull in different directions. A colleague on the enterprise team already caught major misalignment that would have caused weeks of wasted effort.

8. A traumatic brain injury taught Amol the principle that now drives his work: freedom through constraints. In early 2022, a kick to the head during a Muay Thai sparring session caused a traumatic brain injury. Amol spent nine months off work and months relearning to walk, unable to look at screens or listen to music for more than 20 seconds. He was re-injured a month after joining Mercury and had to take two more months off. He's still not fully healed. But the constraints—no alcohol, no caffeine, mandatory breaks, daily meditation—have become the habits that let him operate at the intensity Anthropic demands. "The true freedom in life is learning how to be content when you don't get what you want."
Lenny Rachitsky@lennysan

Anthropic is on an unprecedented growth run. Just in the past year they grew from $1B to $19B ARR. They added $6B in ARR just in *February*. Companies like Palantir and Atlassian took 15-20 years to reach ~$5B ARR. Anthropic is adding that every month. Amol Avasare is head of growth at Anthropic, and one of the most impressive people I've had on the podcast. In his first ever public interview, Amol shares:
🔸 How Anthropic is automating growth experiments with Claude (their internal tool called "CASH")
🔸 Why activation is the single highest-leverage growth problem in AI
🔸 Why Amol is hiring more PMs, not less
🔸 How he uses Cowork to automatically detect team misalignment in Slack
🔸 How the company's focus on AI coding created a research flywheel that accelerated their models
🔸 How Amol landed his role by cold emailing Anthropic's CPO @mikeyk
🔸 The brain injury that nearly ended Amol's career
Listen now 👇 youtu.be/k-H4nsOTuxU

45 replies · 72 reposts · 676 likes · 150.7K views
techarena.au
techarena.au@auTechArena·
@DiscussingFilm Anya looks fire ngl. If Part 3 flops I'm canceling my Dune fandom, take the L.
0 replies · 0 reposts · 0 likes · 0 views
DiscussingFilm
DiscussingFilm@DiscussingFilm·
New look at Anya Taylor-Joy as Alia Atreides in ‘DUNE: PART 3’.
[image]
119 replies · 1.6K reposts · 27.5K likes · 446.5K views
techarena.au
techarena.au@auTechArena·
Big legal showdown ahead: Apple is preparing a Supreme Court appeal in its long-running App Store fight with Epic Games, challenging lower-court rulings on app distribution and fees. Expect high-stakes tech antitrust implications. Stay tuned. #Apple #EpicGames
0 replies · 0 reposts · 0 likes · 0 views