The Nextool AI

7K posts

@nextool_ai

We offer 10,000+ cutting-edge AI tools and a newsletter where you can learn AI for free. Explore tools here: https://t.co/msnGiz55Ku

Newsletter →
Joined September 2024
39 Following · 3.9K Followers
The Nextool AI retweeted
God of Prompt @godofprompt
Holy shit... Alibaba just dropped a 30B parameter AI agent that beats GPT-4o and DeepSeek-V3 at deep research using only 3.3B active parameters.

It's called Tongyi DeepResearch and it's completely open-source. While everyone's scaling to 600B+ parameters, Alibaba proved you can build SOTA reasoning agents by being smarter about training, not bigger.

Here's what makes this insane: the breakthrough isn't size, it's the training paradigm. Most AI labs do standard post-training (SFT + RL). Alibaba added "agentic mid-training", a bridge phase that teaches the model how to think like an agent before it even learns specific tasks.

Think of it like this:
- Pre-training = learning language
- Agentic mid-training = learning how agents behave
- Post-training = mastering specific agent tasks

This solves the alignment conflict where models try to learn agentic capabilities and user preferences simultaneously.

The data engine is fully synthetic. Zero human annotation. Everything from PhD-level research questions to multi-hop reasoning chains is generated by AI. They built a knowledge graph system that samples entities, injects uncertainty, and scales difficulty automatically. 20% of training samples exceed 32K tokens with 10+ tool invocations. That's superhuman complexity.

The results speak for themselves:
- 32.9% on Humanity's Last Exam (vs 26.6% OpenAI DeepResearch)
- 43.4% on BrowseComp (vs 30.0% DeepSeek-V3.1)
- 75.0% on xbench-DeepSearch (vs 70.0% GLM-4.5)
- 90.6% on FRAMES (highest score)

With Heavy Mode (parallel agents + synthesis), it hits 38.3% on HLE and 58.3% on BrowseComp.

What's wild: they trained this on 2 H100s for 2 days at <$500 cost for specific tasks. Most AI companies burn millions scaling to 600B+ parameters. Alibaba proved parameter efficiency + smart training >>> brute force scale.

The bigger story? Agentic models are the future. Models that autonomously search, reason, code, and synthesize information across 128K context windows. Tongyi DeepResearch just showed the entire industry they're overcomplicating it.

Full paper: arxiv.org/abs/2510.24701
GitHub: github.com/Alibaba-NLP/DeepResearch
God of Prompt tweet media
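The knowledge-graph data engine is the most concrete mechanism in the thread, so here is a minimal Python sketch of the idea: sample an entity, walk multi-hop relations, and emit a question whose gold answer falls out of the graph with no human labels. The toy graph, function name, and question template are illustrative assumptions, not Alibaba's actual pipeline; difficulty scales with the `hops` parameter.

```python
import random

# Hypothetical sketch of synthetic multi-hop question generation from a
# toy knowledge graph. The gold answer is derived from the graph itself,
# so no human annotation is needed (the property the thread highlights).
KG = {
    "Marie Curie": [("won", "Nobel Prize in Physics")],
    "Nobel Prize in Physics": [("was first awarded in", "1901")],
    "1901": [("was the death year of", "Queen Victoria")],
}

def sample_multihop_question(kg, start, hops=3):
    entity, chain = start, []
    for _ in range(hops):
        if entity not in kg:
            break
        relation, entity = random.choice(kg[entity])
        chain.append((relation, entity))
    # "Inject uncertainty": only the relations are shown, so answering
    # requires recovering every hidden intermediate entity in the chain.
    question = (f"Starting from {start}, follow: "
                + ", then ".join(rel for rel, _ in chain)
                + ". What do you reach?")
    return question, chain[-1][1]   # question + gold answer

q, a = sample_multihop_question(KG, "Marie Curie")
print(q)
print(a)   # "Queen Victoria"
```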
44 replies · 186 reposts · 896 likes · 86.6K views
The Nextool AI retweeted
Claude @claudeai
Introducing Claude Code on the web. You can now delegate coding tasks to Claude without opening your terminal.
257 replies · 786 reposts · 6.1K likes · 2.7M views
The Nextool AI @nextool_ai
@alex_prompter It was something else. A "god" for some... great work by these researchers.
0 replies · 0 reposts · 0 likes · 120 views
The Nextool AI retweeted
Alex Prompter @alex_prompter
I finally understand what AGI actually means… and it's all thanks to a new paper from some of the biggest names in AI: Yoshua Bengio, Dawn Song, Max Tegmark, Eric Schmidt, and others.

For years, everyone's been throwing the term AGI around like it's some mystical milestone. But this paper finally pins it down with a definition that actually makes sense. They describe Artificial General Intelligence as an 'AI that can match the cognitive versatility and proficiency of a well-educated adult.' No marketing spin. No vague "human-level" claims. Just a clear benchmark based on how human intelligence actually works.

The researchers built their framework around something called the Cattell–Horn–Carroll model, which psychologists use to measure human cognitive ability. It breaks intelligence down into ten areas: things like reasoning, memory, math, language, perception, and speed. Then they did something bold: they tested real AI models against those same standards.

And here's what they found:
- GPT-4 scored 27% toward AGI.
- GPT-5 jumped to 58%.

In other words, the latest model performs at more than half the cognitive range of an average human adult. But it's not there yet. The biggest weakness? Long-term memory: both GPT-4 and GPT-5 scored 0% in the ability to store and recall new information over time.

So yes, we're making real progress. But we're still missing something fundamental: the ability to remember and learn continuously.

What's incredible about this paper is that it finally gives us a way to track that progress. For the first time ever, AGI has a number. And right now, we're sitting at 58%.
Alex Prompter tweet media
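The thread implies the headline number is an aggregate of proficiency across ten CHC-style domains. Here is a minimal sketch of that aggregation. The per-domain values are placeholders chosen only so the unweighted mean reproduces the reported 58% (the paper's actual per-domain scores and weighting may differ); only the long-term-memory 0% comes from the thread.

```python
# Placeholder per-domain proficiencies for a GPT-5-like model. Only the
# long-term-memory 0% is reported in the thread; the rest are assumptions.
domains = {
    "reasoning": 0.9,            "math": 0.8,
    "language": 0.9,             "knowledge": 0.9,
    "reading_and_writing": 0.9,  "working_memory": 0.6,
    "long_term_memory": 0.0,     "visual_perception": 0.4,
    "auditory_perception": 0.4,  "speed": 0.0,
}

# AGI score as the unweighted mean across the ten domains.
agi_score = sum(domains.values()) / len(domains)
print(f"AGI score: {agi_score:.0%}")   # -> 58% with these placeholders
```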
43 replies · 104 reposts · 472 likes · 44.7K views
The Nextool AI retweeted
Investing Expert @ExpertRealState
99% of "future investors" never buy their first property. Not because they can't find deals. Because they don't have a system. I analyzed what separates those who actually close from those stuck in "research mode" for years. Here's the brutal truth (and the exact path that works): Most people don't fail because of bad deals. They fail because they chase every property instead of defining their buy box. They trust gut feelings instead of numbers. They prepare for perfection instead of building systems. The 1% who actually do it follow one path: Think → Buy → Own → Receive. 1. THINK stage kills 70% of beginners. They never write down their "why." So when the first offer gets rejected, they quit. The ones who make it? One clear sentence: "I'm buying two cash-flowing doors to cover my mortgage in 24 months." Clarity beats motivation every time. 2. BUY stage kills another 20%. They wait for the "perfect" deal instead of perfecting their criteria. Millionaire investors start with a formula: Price range: 3x annual income Cash flow: $200+/door Location: 15 min from growth job centers They stop guessing. They measure. 3. OWN stage is where amateurs burn out. They manage with emotion. They underinsure. They over-upgrade. Smart owners build rules before problems: Rent > 1% rule (if not, pass) 6 months of reserves Two contractors, never one Systems replace stress. 4. RECEIVE stage is where wealth compounds. But most never get here. They sell too early or refinance wrong. Those who "receive" don't chase flips. They chase cash flow velocity how fast each dollar comes back to work. They play a 10-year game in a 1-year market. 5. You don't need genius. You need stamina. The ones who win aren't the best at spreadsheets they're the best at sticking to boring, repeatable rules while everyone else gets distracted by trends. Save this 3-line checklist: Define your why in one sentence Write your buy box and stop chasing outside it Build 6 months reserves before scaling Do that for 90 days. You'll already be ahead of 99% of "future investors." Anyone can do it. Not everyone will. You get to decide which side of that sentence you live on.
Investing Expert tweet media
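The buy-box rules in the thread are explicit enough to encode directly. A minimal sketch of that checklist in Python; the function signature and the example deal are illustrative assumptions, not the author's tooling:

```python
# Encode the thread's buy-box rules as a pass/fail checklist.
def passes_buy_box(price, annual_income, monthly_rent,
                   cash_flow_per_door, reserves, monthly_expenses):
    checks = {
        "price <= 3x annual income": price <= 3 * annual_income,
        "cash flow >= $200/door":    cash_flow_per_door >= 200,
        "rent > 1% of price":        monthly_rent > 0.01 * price,
        "6 months of reserves":      reserves >= 6 * monthly_expenses,
    }
    for rule, ok in checks.items():
        print(f"{'PASS' if ok else 'FAIL'}: {rule}")
    return all(checks.values())

# Hypothetical example deal: $180k duplex, $80k income, $1,900 rent,
# $250/door monthly cash flow, $12k reserves, $1,500 monthly expenses.
passes_buy_box(180_000, 80_000, 1_900, 250,
               reserves=12_000, monthly_expenses=1_500)   # -> True
```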
0 replies · 6 reposts · 13 likes · 3.1K views
wetracked.io @wetracked
What if ChatGPT could do your ad tracking audits? Now it can. Here’s a free pack of 10 advanced prompts to: → Audit your pixel setup → Find broken events → Simulate lost revenue → Generate a fix plan Comment "Send" and I'll DM you the 10 prompts. (Follow me to receive it)
wetracked.io tweet media
3 replies · 10 reposts · 13 likes · 4.4K views
The Nextool AI retweeted
Hasan Toor @hasantoxr
Bruh… it’s over for 90% of video studios. 💀 Google’s Veo 3.1 can now generate films with character consistency, emotional flow, and dynamic camera movement. Examples + how to try:
12 replies · 2 reposts · 29 likes · 4K views
The Nextool AI retweeted
Ivan | AI | Marketing @aivanlogic
DON'T USE CHATGPT FOR RESEARCH. Use Perplexity AI. Here are 10 prompts to automate all your research tasks like a pro:
Ivan | AI | Marketing tweet media
1 reply · 7 reposts · 13 likes · 4.3K views
The Nextool AI retweeted
Alex Prompter @alex_prompter
🚨 BREAKING: Veo 3.1 just got its first real production tool. @invideoOfficial launched full integration today and it changes everything. You can now go from 5-second clips → complete cinematic videos with perfect continuity. Here's what just became possible 👇
7 replies · 16 reposts · 81 likes · 17.5K views
The Nextool AI retweeted
Ryan Hart @thisdudelikesAI
🚨Google’s NotebookLM got an upgrade so crazy it feels like cheating. NotebookLM now learns from your docs, builds summaries, and answers questions like it knows your brain. Here’s everything you need to know🧵👇
Ryan Hart tweet media
3 replies · 16 reposts · 43 likes · 6.9K views
God of Prompt @godofprompt
Holy shit... we just crossed a line nobody was paying attention to 🤯 AI is now writing more articles than humans on the entire internet.

A new study from Graphite just dropped the numbers, and they're absolutely brutal. In November 2024, AI-generated articles officially surpassed human-written content. Not "catching up." Not "approaching parity." Straight-up MORE AI content than human content being published.

Here's the timeline that should terrify every content creator:
→ November 2022: ChatGPT launches
→ November 2023: 39% of all articles are AI-generated (just 12 months later)
→ November 2024: AI articles surpass human articles

The researchers analyzed 65,000 URLs from CommonCrawl and ran them through AI detection algorithms. The false positive rate? Only 4.2%. False negative rate? 0.6%. These numbers are solid.

But here's the plot twist nobody expected: the growth has plateaued since May 2024. AI article production isn't accelerating anymore. It's stable. The researchers think they know why: practitioners discovered AI-generated content doesn't perform well in search rankings. Translation? The market corrected itself. Companies burned through budgets publishing AI slop, realized it didn't drive traffic, and pumped the brakes.

The methodology is fascinating. They used articles published BEFORE ChatGPT launched (Jan 2020 - Nov 2022) as their "definitely human" baseline. Smart move. Then they generated 6,000 test articles using GPT-4o to test the detector's accuracy. 99.4% detection rate on AI content. This isn't guesswork.

What they didn't measure makes this even wilder: AI-assisted articles (human-edited AI drafts) weren't counted. The real percentage of AI-touched content is probably way higher.

The study also found that despite AI articles flooding the web, they largely don't appear in Google results or ChatGPT responses. The internet is full of AI content that nobody sees.

We're living through the largest content production shift in human history, and most people have no idea it happened. Every content strategist who said "AI will never replace writers" just got proven wrong by the data. The question isn't whether AI can write content anymore. It's whether AI-generated content can actually compete for attention in a world already drowning in it.

The content revolution already happened. You just didn't notice.
God of Prompt tweet media
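One step worth making explicit: the quoted 99.4% detection rate is just 1 minus the 0.6% false negative rate, and how trustworthy an "AI" label is still depends on what share of articles are actually AI. A quick sketch of that arithmetic; the prevalence values are illustrative, with 39% being the thread's November 2023 figure:

```python
# Error rates quoted in the thread.
fpr = 0.042   # human articles wrongly flagged as AI
fnr = 0.006   # AI articles missed (so detection rate = 1 - fnr = 99.4%)

def precision_of_ai_label(ai_share):
    """Fraction of 'AI' labels that are actually AI, at a given AI share."""
    true_pos = ai_share * (1 - fnr)     # AI articles correctly flagged
    false_pos = (1 - ai_share) * fpr    # human articles mis-flagged
    return true_pos / (true_pos + false_pos)

for share in (0.10, 0.39, 0.50):
    print(f"AI share {share:.0%}: "
          f"label precision {precision_of_ai_label(share):.1%}")
# -> roughly 72%, 94%, and 96%: the detector is strong, but the headline
#    comparison still depends on how common AI articles already are.
```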
25 replies · 44 reposts · 152 likes · 38.2K views
The Nextool AI retweeted
Google DeepMind @GoogleDeepMind
Veo is getting a major upgrade. 🚀 We’re rolling out Veo 3.1, our updated video generation model, alongside improved creative controls for filmmakers, storytellers, and developers - many of them with audio. 🧵
118 replies · 409 reposts · 2.4K likes · 748.5K views
Spencer Baggins @bigaiguy
This is insane 🤯 McKinsey just reported that 80% of companies now use AI but only 1% are doing it well. That single number explains why every "AI initiative" you've heard about sounds impressive but delivers nothing.

Most companies didn't integrate AI. They *outsourced curiosity.* They bought licenses, ran pilots, and called it innovation. The few that *did* transform look nothing like the rest. They rebuilt workflows around models, retrained staff to think in prompts not PowerPoints, and put AI governance right next to the CFO.

McKinsey's numbers are brutal:
- Only 21% redesigned core workflows for gen-AI.
- Just 28% have their CEO personally leading governance.
- Those who did are already reporting higher EBIT from gen-AI.

Everyone else is still "experimenting." Translation: burning budget on proofs of concept with no system design, no feedback loops, and no metrics.

The lesson's obvious: AI ROI doesn't come from the model. It comes from the org that's willing to *break itself* to use it. The future isn't AI-native companies. It's *AI-rewired* ones.
Spencer Baggins tweet media
5 replies · 4 reposts · 11 likes · 1.6K views
The Nextool AI retweeted
Brady Long @thisguyknowsai
Alibaba just released a new lineup of next-gen Qwen3 AI models - an update that takes multimodal performance to another level. They’re incredibly fast. They’re extremely capable. And they’re doing things I didn’t think were possible yet. Here are 5 that blew my mind 👇
8 replies · 14 reposts · 25 likes · 6.1K views
The Nextool AI retweeted
Robert Youssef @rryssf_
Holy shit... Tencent researchers just killed fine-tuning AND reinforcement learning in one shot 😳

They call it Training-Free GRPO (Group Relative Policy Optimization). Instead of updating weights, the model literally learns from its own experiences, like an evolving memory that refines how it thinks without ever touching parameters.

Here's what's wild:
- No fine-tuning. No gradients.
- Uses only 100 examples.
- Outperforms $10,000+ RL setups.
- Total cost? $18.

It introspects its own rollouts, extracts what worked, and stores that as a "semantic advantage", a natural-language form of reinforcement. LLMs are basically teaching themselves 'how' to think, not just 'what' to output.

This could make traditional RL and fine-tuning obsolete. We're entering the "training-free" era of AI optimization.
Robert Youssef tweet media
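Read literally, the loop the tweet describes is simple enough to sketch: group rollouts, rank them, distill a natural-language lesson, and carry it forward as context. A minimal version, where `llm()` is a hypothetical stand-in for any chat-completion call and `score_fn` is whatever task reward you have; this illustrates the idea, not Tencent's implementation:

```python
def llm(prompt: str) -> str:
    return "stub response"   # hypothetical stand-in for a real model call

memory: list[str] = []       # the evolving natural-language "experience"

def training_free_grpo_step(task: str, score_fn, group_size: int = 4):
    # Group rollouts, conditioned on everything learned so far.
    context = "\n".join(memory)
    rollouts = [llm(f"{context}\n\nTask: {task}") for _ in range(group_size)]
    # Group-relative ranking stands in for a reward-weighted update.
    ranked = sorted(rollouts, key=score_fn)
    # Introspect: distill what separated best from worst into one lesson,
    # the "semantic advantage". This is the whole update; no weights move.
    lesson = llm(
        "Compare the attempts and state in one sentence what made the "
        f"best one work.\n\nWORST:\n{ranked[0]}\n\nBEST:\n{ranked[-1]}"
    )
    memory.append(lesson)
    return ranked[-1]        # best answer this round

# Usage with a toy reward (longer = better), purely for illustration:
training_free_grpo_step("Summarize GRPO in one line.", score_fn=len)
```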
131 replies · 590 reposts · 3.6K likes · 641.5K views
The Nextool AI retweeted
God of Prompt @godofprompt
🚨 This paper might be the bridge between logic and intelligence.

It's called Tensor Logic, and it turns logical reasoning into pure tensor algebra: no symbols, no heuristics, just math.

Here's the wild part:
- Logical propositions become vectors.
- Inference rules become tensor contractions.
- Truth values propagate as continuous operations, meaning deduction and neural computation now speak the same language.

This isn't symbolic AI or deep learning. It's both. Tensor Logic proves that Boolean reasoning, probabilistic inference, and even predicate logic can all be embedded inside a single differentiable framework.

Every major AI model today struggles with consistency and reasoning because logic is discrete and gradients are continuous. Tensor Logic erases that boundary. In experiments, the system performs logical inference as matrix math, allowing neural nets to reason with symbolic precision, and symbolic systems to learn like neural nets.

If this scales, we might finally get models that don't just predict truths; they can prove them. The fusion of logic and learning just got real.

Paper: "Tensor Logic: A Unified Framework for Differentiable Reasoning"
God of Prompt tweet media
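The claim "inference rules become tensor contractions" has a classic concrete instance: a Datalog-style rule computed with einsum. A minimal NumPy sketch with toy entities and relations of my own choosing, not the paper's notation:

```python
import numpy as np

# Rule: grandparent(x, z) <- parent(x, y) AND parent(y, z).
# Encode parent/2 as a Boolean matrix over entities; the rule is then a
# contraction over the shared variable y.
people = ["alice", "bob", "carol"]       # alice -> bob -> carol
parent = np.array([[0., 1., 0.],
                   [0., 0., 1.],
                   [0., 0., 0.]])

# grandparent[x, z] = sum_y parent[x, y] * parent[y, z]
grandparent = np.einsum("xy,yz->xz", parent, parent)
grandparent = np.minimum(grandparent, 1.0)   # clamp truth values to [0, 1]

print(grandparent[people.index("alice"), people.index("carol")])  # 1.0
# With soft (probabilistic) entries in `parent`, the same contraction
# propagates continuous truth values, which is the differentiable part.
```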
107 replies · 285 reposts · 1.6K likes · 146.9K views
The Nextool AI retweeted
Dr Alex Young ⚡️ @AlexanderFYoung
🚨 Google might've just reinvented how AI thinks.

They built something called TUMIX, and it's basically AI teamwork on steroids. Instead of one mega model doing everything, Google made a squad of mini AIs that debate, challenge, and refine each other's answers in real time. One codes. Another searches. Another reasons. They argue → share → vote → and reach consensus.

The result? Gemini-2.5 + TUMIX crushes every reasoning benchmark by +17.4% at half the inference cost. No retraining. No extra data. Just coordination.

Here's the twist: a diverse team of 15 small agents outperformed 15 copies of the "best" single model. Then Gemini started designing its own new teammates and got even better. The system literally evolved smarter versions of itself.

Forget trillion-parameter monsters. The future of AI is small minds that think together.

Read the full paper: arxiv.org/abs/2510.01279
Dr Alex Young ⚡️ tweet media
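The share → refine → vote loop described above is easy to sketch. A hedged, minimal version with trivial stand-in agents; the real system's agents would be differently tooled (code, search, reasoning) and the paper's actual protocol may differ:

```python
from collections import Counter

def run_round_table(question, agents, rounds=3):
    # Round 1: every agent answers independently.
    answers = [agent(question, notes="") for agent in agents]
    for _ in range(rounds - 1):
        # Later rounds: each agent sees all previous answers and refines.
        notes = "\n".join(answers)
        answers = [agent(question, notes=notes) for agent in agents]
    # Consensus: majority vote over the final round.
    return Counter(answers).most_common(1)[0][0]

# Stand-in "agents" that just return fixed answers, for illustration:
agents = [
    lambda q, notes: "42",   # the "coder"
    lambda q, notes: "42",   # the "searcher"
    lambda q, notes: "41",   # the "reasoner", outvoted
]
print(run_round_table("What is 6 * 7?", agents))   # -> "42"
```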
4 replies · 5 reposts · 15 likes · 2.8K views
The Nextool AI retweeted
Alex Hughes @alxnderhughes
This is truly wild 👇

A new system called Paper2Video can read a scientific paper and automatically create a full presentation video: slides, narration, subtitles, even a talking head of the author. The agent behind it is called PaperTalker, and it beat human-made videos in comprehension tests.

Hours of academic video editing… gone. AI now explains your research better than you do.

Here's how it works (and why this changes everything):
Alex Hughes tweet media
1 reply · 3 reposts · 14 likes · 2.4K views