AJ...

1.7K posts

AJ...

@DrSpokk

Made in Detroit but ATL active. Fan of tech and advocate for Black entrepreneurs & African investments. Investor & recovering engineer. #Xoogler #DealTeamSIXX

Globe-trotting · Joined December 2010
5.7K Following · 500 Followers

AJ... retweeted
Georgia Tech
Georgia Tech@GeorgiaTech·
BREAKING: Georgia Tech officially bans "hell" and "helluva" from its fight song, flags on the Ramblin' Wreck, and Rat Caps. "Doozy" will replace "helluva," and "tarnation" will replace "hell." #TTWg 🐝 | c.gatech.edu/tarnation
Georgia Tech tweet media
148
61
806
254K
Richard Seroter
Richard Seroter@rseroter·
Inside @google, we have a system for sending small bonuses to peers that helped us out. It's used often, and builds a culture of gratitude. We added an AI tool that scans your chats, emails, whatever and generates a report that shows who helped you the most lately. So handy.
184
104
4.5K
578.3K
AJ... retweeted
Luiza Jarovsky, PhD
Luiza Jarovsky, PhD@LuizaJarovsky·
🚨 For those who are still in doubt, this is the U.S. Copyright Office's official opinion on the COPYRIGHTABILITY of AI-generated works:

"The Office concludes that, given current generally available technology, prompts alone do not provide sufficient human control to make users of an AI system the authors of the output. Prompts essentially function as instructions that convey unprotectible ideas. While highly detailed prompts could contain the user's desired expressive elements, at present, they do not control how the AI system processes them in generating the output." (page 18)

Let's recap:
- Any copyright claim involving AI must demonstrate HUMAN control over creative elements.
- The assessment is done on a case-by-case basis.
- AI-assisted is NOT the same as AI-generated.
- AI-generated works without any HUMAN creative intervention are NOT copyrightable.

For some reason, every time I write about this topic here or in my newsletter, some people get angry and try to deny the information above. I wonder why...

Make sure to share the report with friends so they know what to expect in the U.S.

👉 Link to the full report below
👉 To learn more about AI's legal and ethical challenges, join my newsletter's 91,700+ subscribers below.
Luiza Jarovsky, PhD tweet media
78
354
881
73K
AJ...
AJ...@DrSpokk·
The disruption caused by emerging real-time translation tech...
Aakash Gupta@aakashgupta

Duolingo closed at $112 yesterday. That’s down 70% from its May 2025 high of $544.

The revenue story is real. Duolingo went from $162M in 2021 to nearly $1B in trailing revenue. 41% YoY growth last quarter. 50 million DAUs. Record EBITDA margins approaching 30%. By every operating metric, this company is executing at an elite level.

So why has the market vaporized $16B+ in market cap since May? Three things happened at once. Bookings growth decelerated from the pandemic-era highs, and Goldman and Wells Fargo both flagged “challenging comparisons” ahead. The CFO who steered the company through its entire public life announced his departure. And the AI narrative flipped from tailwind to existential threat in investor minds.

Then T-Mobile dropped a bomb three days ago. They announced “Live Translation,” a real-time AI translation service built directly into their wireless network. Over 50 languages. No app, no download, no subscription. Works on a flip phone. The stock fell another 10% in a single session.

The market is now asking a question it never had to ask before: if AI can translate any conversation in real time at the network level, what’s the premium on spending 2,000 hours learning a language the hard way?

Duolingo trades at 6x sales today. At its peak it traded at 30x+. The business grew into its valuation and then the valuation collapsed anyway. The market isn’t repricing the present. It’s repricing a future where the entire category of “language learning” gets compressed by AI that skips the learning part entirely.

Revenue up 410% and stock down 70%. The market is telling you that growth in a category AI might eliminate gets valued at zero.

0
0
0
11
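A quick sanity check on the quoted thread's figures, sketched in Python. The prices are the tweet's own; note that $112 from a $544 high is closer to a 79% drawdown than the stated 70%, and the 2021-to-trailing revenue multiple is roughly 6x.

```python
# Sanity-checking the drawdown and revenue figures quoted in the thread.
def drawdown(peak: float, current: float) -> float:
    """Peak-to-current decline as a fraction of the peak."""
    return (peak - current) / peak

# May 2025 high $544, yesterday's close $112 (per the tweet).
print(f"{drawdown(544, 112):.1%}")  # 79.4% -- steeper than the tweet's "70%"

# Revenue: $162M (2021) to ~$1B trailing (per the tweet).
print(f"{1000 / 162:.2f}x")  # 6.17x, i.e. roughly +517%
```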
ZAYVEN KNOX
ZAYVEN KNOX@ZayvenKnox·
Google Gemini is the smartest AI right now. But 90% of people prompt it like ChatGPT. That's why I made the Gemini Mastery Guide: → How Gemini thinks differently → Prompts built for Gemini → 2000+ AI Prompts Comment "Gemini" and I'll DM it free. (Note: no follow=no DM)
ZAYVEN KNOX tweet media
445
90
450
47.1K
Hasan Toor
Hasan Toor@hasantoxr·
Google Gemini is the smartest AI right now. But 90% of people prompt it like ChatGPT. That's why I made the Gemini Mastery Guide: → How Gemini thinks differently → Prompts built for Gemini → 2000+ AI Prompts Comment "Gemini" and I'll DM it free.
Hasan Toor tweet media
3.3K
242
3K
415.7K
AJ...
AJ...@DrSpokk·
Aakash Gupta@aakashgupta

Your brain has a hard cap on what it can process. About 50 bits of conscious information per second, adding up to roughly 125 billion bits across your lifetime. That’s your total cognitive budget. Every notification, every open tab, every unfinished task burns through that number permanently. You’re spending down a fixed account every day and most people never realize it.

This is why focus isn’t about willpower. It’s about allocation. When your mind runs background processes (emails to send, conversations replaying, forgotten tasks), you’re fragmenting a scarce resource across competing demands. The inability to concentrate isn’t a character flaw. It’s a processing bottleneck. Your RAM is maxed out before you even sit down to work.

Flow states solve this by collapsing attention onto a single target. The formula is challenge slightly above current skill level. Too high triggers anxiety. Too low triggers boredom. Both break focus. The sweet spot creates cognitive lock-in where the task absorbs full capacity and everything else disappears. Video games figured this out decades ago with dynamic difficulty. Most people’s actual work never does.

The counterintuitive piece is rest. When you stop focused work, your brain doesn’t power down. It shifts into the Default Mode Network, connecting regions associated with creativity and future planning. Energy consumption barely drops. Your subconscious is processing everything your conscious mind couldn’t fit. This is why walks and showers produce breakthroughs. But only if you let the network run. Fill every gap with podcasts and scrolling and you never give it space to work.

The rhythm matters more than the hours. Intense focus, then genuine rest. Building, then processing. Most people either grind constantly and starve the system that generates their best ideas, or rest constantly and never build anything worth processing. The people who transform fastest alternate between both, letting each system do what it’s designed for.

0
0
0
14
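The "125 billion bits" lifetime figure above can be reproduced with back-of-the-envelope arithmetic. This sketch assumes the tweet counts every second of a lifespan, not just waking hours, since that is the only way the round number falls out:

```python
# Reproducing the tweet's lifetime cognitive budget from its 50 bits/s rate.
BITS_PER_SECOND = 50
SECONDS_PER_YEAR = 60 * 60 * 24 * 365  # counting sleep too

years = 125e9 / (BITS_PER_SECOND * SECONDS_PER_YEAR)
print(f"{years:.1f} years")  # 79.3 years of nonstop processing
```

Restricting the count to ~16 waking hours a day would shrink the budget to roughly 84 billion bits over the same lifespan, so the headline number is best read as an order-of-magnitude claim.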
AJ... retweeted
Dhairya
Dhairya@dkare1009·
𝗖𝗹𝗼𝘂𝗱 𝗖𝗼𝗺𝗽𝘂𝘁𝗶𝗻𝗴 Learning cloud computing in 2025 is crucial because it powers modern businesses, AI, and scalable technology solutions. Notes: Zero To Hero Complete Cloud Computing Handwritten Notes. Simply: 1. Follow me (so I can DM) 2. Like and repost 3. Comment "send" to receive
Dhairya tweet media
161
160
524
37.9K
AJ... retweeted
Rohan Paul
Rohan Paul@rohanpaul_ai·
BIG NEWS: Meta’s chief AI scientist Yann LeCun is reportedly preparing to leave to start a new company.

LeCun has long argued that current large language models (LLMs) are “useful” tools but not the path to human-like reasoning, and his planned startup will focus on “world models” that learn from video and spatial signals to plan and act, which is a very different bet from scaling text-only systems.

Inside Meta, power shifted from the long-horizon FAIR research group that LeCun created in 2013 to new product-aimed units, including a handpicked team building the next Llama models with aggressive hiring packages, and LeCun is now reporting into Wang rather than the previous product chain.

So overall, Meta is doubling down on near-term LLM products under new leadership, while LeCun is stepping out to prove that video-grounded world models can close the reasoning gap that scaling text models has not.

---
ft.com/content/c586eb77-a16e-4363-ab0b-e877898b70de
Rohan Paul tweet media
11
15
105
19.5K
AJ... retweeted
Rohan Paul
Rohan Paul@rohanpaul_ai·
🧠 "The Impact of Artificial Intelligence on Human Thought"

A big 132-page report. AI is shifting real thinking work onto external systems, which boosts convenience but can weaken the effort that builds understanding and judgment.

Cognitive offloading cuts the mental work people invest in tasks, which boosts convenience in the moment but can weaken critical thinking and creativity over time.

With AI, personalized feeds lock users into filter bubbles, so views polarize across groups while language and reasoning become more uniform inside each group.

It recommends using AI to cut noise and routine steps while keeping humans doing the heavy mental lifting, and adding controls, because personalization, deepfakes, and opaque models can steer choices at scale.

🧵 Read on 👇
Rohan Paul tweet media
21
41
219
14.7K
AJ...
AJ...@DrSpokk·
Definitely looking into this for the little princess! #GirlDad #futureofeducation
Alex Prompter@alex_prompter

This is going to revolutionize education 📚

Google just launched "Learn Your Way" that basically takes whatever boring chapter you're supposed to read and rebuilds it around stuff you actually give a damn about. Like if you're into basketball and have to learn Newton's laws, suddenly all the examples are about dribbling and shooting. Art kid studying economics? Now it's all gallery auctions and art markets.

Here's what got me though. They didn't just find-and-replace examples like most "personalized" learning crap does. The AI actually generates different ways to consume the same information:
- Mind maps if you think visually
- Audio lessons with these weird simulated teacher conversations
- Timelines you can click around
- Quizzes that change based on what you're screwing up

They tested this on 60 high schoolers. Random assignment, proper study design. Kids using their system absolutely destroyed the regular textbook group on both immediate testing and when they came back three days later. Every single one said it made them more confident.

The part that surprised me? They actually solved the accuracy problem. Most ed-tech either dumbs everything down to nothing or gets basic facts wrong. These guys had real pedagogical experts evaluate every piece on like eight different measures.

Look, textbooks have sucked for centuries not because publishers are idiots, but because making personalized versions was basically impossible at scale. That just changed.

This isn't some K-12 thing either. Corporate training could work this way. Technical documentation. Professional development. Imagine if every boring compliance course used examples from your actual job instead of generic office scenarios.

We might have just watched the industrial education model crack for the first time. About damn time.

0
0
1
127
AJ...
AJ...@DrSpokk·
Heavy capital investment required to enable AI...will be interesting to see which players can withstand the required runway before sufficient ROI is realized. #longGameOfAI
Rohan Paul@rohanpaul_ai

Love @McKinsey reports. By 2030, AI data centers will need to spend a whopping $6.7 trillion on computing to keep up with demand. AI demand alone will require $5.2 trillion in investment.

Of that $5.2 trillion, the largest share, 60% ($3.1 trillion), will go to technology developers and designers, who produce chips and computing hardware for data centers. Approximately 15% ($0.8 trillion) will flow to builders for land, materials, and site development. Another 25% ($1.3 trillion) will be allocated to energizers for power generation and transmission, cooling, and electrical equipment.

Companies across the compute power value chain that proactively secure critical resources (land, materials, energy capacity, and computing power) could gain a significant competitive edge.

0
0
1
30
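A minimal sketch checking that the quoted percentage split reassembles the $5.2 trillion headline figure (category labels are paraphrased from the tweet; the tweet's dollar amounts are these products rounded to one decimal):

```python
# Sanity-checking the McKinsey split: shares of the $5.2T AI-driven
# investment should sum back to the headline total.
TOTAL = 5.2  # trillions USD

shares = {
    "technology developers/designers": 0.60,  # chips, computing hardware
    "energizers": 0.25,                       # power, cooling, electrical
    "builders": 0.15,                         # land, materials, site work
}

allocated = {name: round(TOTAL * s, 2) for name, s in shares.items()}
print(allocated)  # 3.12, 1.3, and 0.78 trillion respectively
print(f"{sum(allocated.values()):.2f}")  # 5.20
```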
AJ...
AJ...@DrSpokk·
Rohan Paul@rohanpaul_ai

Harvard study, published in nature, finds a research-based AI tutor yielded larger learning gains than an in-class active-learning lesson. Students learned more in less time with AI, and they felt more engaged and more motivated.

Harvard ran a randomized, controlled crossover study in an intro physics course with 194 students. Each student did 2 short units, 1 with the AI tutor at home and 1 with active learning in class, with the same content, the same worksheets, and pre- and post-tests.

The AI group’s median post-test score was 4.5 versus 3.5 for the in-class group, starting from a combined baseline of 2.75. A regression with many controls still showed a large effect, and a ceiling-robust estimate put the effect size in the 0.73 to 1.3 standard deviation range, with p<0.00000001.

Students using the tutor finished faster, with a 49 minute median versus the class’s assumed 60 minutes, and 70% of them spent under 60 minutes. They also reported higher engagement, 4.1 vs 3.6, and higher motivation, 3.4 vs 3.1, with similar enjoyment and growth-mindset ratings.

Why it worked comes down to design, not just the model. The team embedded proven teaching moves into the tutor: active questioning, small steps that manage mental load, clear scaffolding, timely feedback, and self-pacing so each learner can slow down or speed up as needed. To keep answers accurate, the prompts included full worked solutions, so the model could focus on checking understanding and explaining, rather than improvising. 83% of students said the tutor’s explanations were as good as or better than the human instructors.

The practical takeaway is simple: plug a carefully structured AI tutor in before class to level up baseline understanding, then spend live time on harder problem solving and projects. Used this way, the tutor complements instructors and improves outcomes.

---
nature.com/articles/s41598-025-97652-6

0
0
0
11
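Restating the study's median scores as gains over the shared pre-test baseline makes the size of the difference concrete. The numbers are the tweet's; the ratio below is an illustration, not the study's own effect-size metric:

```python
# Median post-test scores from the quoted Harvard physics study,
# expressed as raw gains over the shared pre-test baseline.
baseline = 2.75
ai_post, class_post = 4.5, 3.5

ai_gain = ai_post - baseline        # 1.75 points gained with the AI tutor
class_gain = class_post - baseline  # 0.75 points gained in class

print(f"{ai_gain / class_gain:.2f}x")  # 2.33x the median gain, in less time
```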