Brian T. Kent

459 posts

@btk

Engineering, Software, AI, Construction, Manufacturing and Real Estate. Digital Fabrication. Engineer / Entrepreneur / Investor

NYC · Joined March 2007
1.2K Following · 266 Followers
Brian T. Kent retweeted
Daniel Pourbaba
Daniel Pourbaba@DPOURBABA·
A lot of developer friends ask how we design and build beautiful buildings with budgets that beat far less attractive projects. There are many levers, but most deals quietly die in one place: structure. Here are some pro tips, particularly for multifamily podium design:

Column grid. Keep it tight: 24–28 ft max. Go wider and you trigger thicker PT slabs, drop panels, punching shear steel, and endless MEP conflicts. The last one might be the most painful, but the first two are the most expensive.

Load path. Never shift columns between floors. Transfers = heavier structure, more rebar, slower schedules, real money burned. Don't approve a schematic design layout before this is fleshed out.

Slabs & soils. Bad soils force thicker slabs, mats, and piles. Foundation costs can jump 2–3×. Choose sites carefully. Get good soils. Expansive soils? We're out.

MEPs. Stack wet walls. Have dedicated plumbing walls with no structural role. Lock sleeves early. Another killer: bathrooms over columns or even electrical rooms. Late MEP coordination is how "on-budget" jobs blow up in the field.

Shear & hold-downs. Maintain continuous exterior wall zones (~12–16 in) from podium to roof. Clean load paths = less steel, simpler inspections, better seismic performance.

Wood framing. Align shear walls with column grids. Misalignment adds transfer forces and structural weight you don't get paid for. Again, don't go past the schematic phase until this is sorted out. The only exception: facade area.

Cost-effective construction isn't about cheap finishes. It's about disciplined structure, driven by architectural design logic. Get this right and you're halfway there. Get it wrong and no amount of value engineering will save you.
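The grid and load-path rules in the thread are mechanical enough to encode as a pre-schematic checklist. A minimal illustrative sketch, assuming a hypothetical representation of the schematic (grid spacing in feet, column line positions per floor); the 28 ft threshold and the no-transfers rule come straight from the tweet:

```python
# Illustrative only: the data model (spacing + column lines per floor) is
# a made-up simplification, not any real BIM/structural format.

MAX_GRID_FT = 28  # beyond this: thicker PT slabs, drop panels, punching shear steel

def check_schematic(grid_spacing_ft, column_lines_by_floor):
    """Return a list of red flags to resolve before approving a schematic layout."""
    flags = []
    if grid_spacing_ft > MAX_GRID_FT:
        flags.append(f"grid {grid_spacing_ft} ft > {MAX_GRID_FT} ft max")
    # Load path: columns must stack floor to floor (no transfers).
    for lower, upper in zip(column_lines_by_floor, column_lines_by_floor[1:]):
        shifted = set(lower) ^ set(upper)
        if shifted:
            flags.append(f"column shift between floors: {sorted(shifted)}")
    return flags

print(check_schematic(30, [[0, 28, 56], [0, 28, 56]]))  # wide grid flagged
print(check_schematic(26, [[0, 26, 52], [0, 26, 52]]))  # clean: no flags
```

The point of the sketch is only that both rules are checkable before schematic sign-off, which is exactly when the thread says to catch them.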
English
16
23
444
82.2K
wavefnx
wavefnx@wavefnx·
Let's talk history, aesthetics and massively parallel supercomputing. Thinking Machines Corp (1983–94) was founded by Hillis & Handler, growing out of Hillis's PhD thesis at MIT. The Connection Machine was an alternative to the von Neumann architecture, designed for Artificial Intelligence.
English
5
12
168
14K
scott belsky
scott belsky@scottbelsky·
I am surprised there is no single app that simply (1) collects all of your health data points - from whoop/oura data, blood tests, PDFs of lab tests you feed it, other scraped sources, AND (2) auto-generates a system prompt of sorts for any LLM when you ask any health question..?
English
160
35
1.2K
229.5K
Brian T. Kent retweeted
Rohan Paul
Rohan Paul@rohanpaul_ai·
AI just learned to fine-tune itself between questions. MIT introduces SEAL, a framework enabling LLMs to self-edit and update their weights via reinforcement learning, all by itself. LLMs consume whatever data they are given, so they stay frozen after pretraining. SEAL teaches a model to write its own study material, fine-tune on it, and keep learning.

⚙️ The Core Concepts

The core idea behind SEAL is to enable language models to improve themselves when encountering new data by generating their own synthetic data and optimizing their parameters through self-editing. The model's training objective is to directly generate these self-edits (SEs) using data provided within the model's context. Each self-edit is a text directive that specifies how to make synthetic data and set hyperparameters for updating weights. The generation of these self-edits is learned through reinforcement learning: the model is rewarded when the generated self-edits, once applied, lead to improved performance on the target task.
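The loop the tweet describes (generate a self-edit, fine-tune on it, reward if the downstream score improves) can be sketched in a few lines. A non-authoritative sketch, not the paper's implementation: `generate_self_edit`, `finetune`, and `evaluate` are hypothetical stand-ins for the model call, a weight-update step, and the target-task score.

```python
# Minimal sketch of one SEAL-style step. All callables are caller-supplied
# stand-ins; in the real system the self-edit is a text directive
# (synthetic data + hyperparameters) written by the model itself.

def seal_step(model, context, task, generate_self_edit, finetune, evaluate):
    """One RL step: the model writes its own study material, trains on it,
    and is rewarded if downstream performance improves."""
    baseline = evaluate(model, task)
    self_edit = generate_self_edit(model, context)  # text directive from the model
    updated = finetune(model, self_edit)            # apply the self-edit as a weight update
    reward = evaluate(updated, task) - baseline     # positive iff the edit helped
    return (updated if reward > 0 else model), reward
```

Keeping the update only when the reward is positive mirrors the rewarded-self-edit objective the tweet describes, without claiming any of the paper's actual training details.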
English
19
86
455
35.6K
Brian T. Kent
Brian T. Kent@btk·
@svlevine @ylecun In text, the next word is a single token, and all previous tokens are carried forward as context for the next pass, so there is a clear concept of a collective "past". In video, everything is constantly in the present.
English
0
0
0
95
Sergey Levine
Sergey Levine@svlevine·
I always found it puzzling how language models learn so much from next-token prediction, while video models learn so little from next frame prediction. Maybe it's because LLMs are actually brain scanners in disguise. Idle musings in my new blog post: sergeylevine.substack.com/p/language-mod…
English
51
170
1.3K
314.4K
Brian T. Kent
Brian T. Kent@btk·
It is not about making the thing, it is about being able to make the machine that makes the thing.
English
0
0
0
408
Brian T. Kent retweeted
Rohan Paul
Rohan Paul@rohanpaul_ai·
Brilliant 🔥 @GoogleDeepMind launched AlphaEvolve, a Gemini-powered coding agent that discovers and evolves algorithms, outperforming its predecessor AlphaTensor. It achieved a 23% matrix kernel speedup, a 32.5% GPU kernel boost, and 0.7% compute recovery in data centers, driving major efficiency gains in AI training, chip design, and math problem-solving.

⚙️ The Details

→ AlphaEvolve uses an ensemble of Gemini models (Flash for exploration and Pro for deep refinement) to generate and evolve code for algorithmic tasks. Programs are scored via automated evaluators, enabling objective progress tracking.
→ It discovered faster matrix multiplication algorithms, including a 4x4 complex matrix multiplication using only 48 scalar ops, improving on Strassen's algorithm.
→ It recovered 0.7% of compute resources globally via improved data center scheduling with interpretable heuristics.
→ Contributed Verilog-level enhancements to Google's TPU design pipeline.
→ Delivered a 23% speedup in matrix ops and a 1% reduction in Gemini model training time.
→ Achieved 32.5% acceleration on FlashAttention GPU kernels, reducing dev cycles from weeks to days.
→ AlphaEvolve doesn't just optimize, it innovates. Given only a minimal code skeleton, it co-designed a novel gradient-based optimization procedure, creating multiple new matrix multiplication algorithms.
→ Solved parts of 50+ open math problems, improving on 20% of them, including the 11D kissing number problem with 593 spheres.
→ It rediscovered state-of-the-art solutions in 75% of experiments across number theory, geometry, analysis, and combinatorics, validating its mathematical depth.
→ In 20% of cases, AlphaEvolve pushed the frontier, like improving the kissing number problem in 11D with a new configuration of 593 spheres, setting a new lower bound.
→ Setup time for most math problems was just hours, enabling fast iteration and exploration across diverse domains.
→ Early access for academics is being planned, signaling potential broader availability.
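The propose-score-select pattern the tweet describes (candidates generated, scored by an automated evaluator, best survivors evolved further) can be shown on a toy problem. This is a generic evolutionary-search sketch, not DeepMind's system: here "programs" are just numbers and the evaluator is a made-up objective, whereas AlphaEvolve mutates real code via Gemini.

```python
import random

# Toy evolutionary loop: mutate candidates, score them with an automated
# evaluator, keep the best. Stands in for the evolve-and-score cycle only.

def evolve(seed, mutate, score, generations=50, population=20, rng=None):
    rng = rng or random.Random(0)   # seeded for reproducibility
    pool = [seed]
    for _ in range(generations):
        children = [mutate(rng.choice(pool), rng) for _ in range(population)]
        pool = sorted(pool + children, key=score, reverse=True)[:population]
    return pool[0]

# Toy objective: find x maximizing -(x - 3)^2, i.e. x near 3.
best = evolve(
    seed=0.0,
    mutate=lambda x, rng: x + rng.gauss(0, 0.5),
    score=lambda x: -(x - 3.0) ** 2,
)
print(best)  # converges near 3.0
```

The automated, objective evaluator is the load-bearing piece: it is what lets the loop track progress without human judgment, which is the property the tweet highlights.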
Google DeepMind@GoogleDeepMind

Introducing AlphaEvolve: a Gemini-powered coding agent for algorithm discovery. It’s able to: 🔘 Design faster matrix multiplication algorithms 🔘 Find new solutions to open math problems 🔘 Make data centers, chip design and AI training more efficient across @Google. 🧵

English
2
19
84
14.4K
Andy Dunn
Andy Dunn@dunn·
podcast name 🎙️ about turning negatives to positives
English
6
0
2
1.1K
Brian T. Kent
Brian T. Kent@btk·
@JacobEdwardInc Perhaps seeing results will prove to be a gateway drug, in a positive sense, to people mentally connecting the dots between quantity consumed and weight. I think the lesser of evils is rational if nothing else resonates. Anything to get started.
English
1
0
1
32
Jacob Edward
Jacob Edward@JacobEdwardInc·
Does this mean that we should ignore the remaining 95%+ who simply won’t adhere to a significant lifestyle change? These people are mothers, fathers, brothers, sisters… Of course we shouldn’t.
English
2
0
0
136
Jacob Edward
Jacob Edward@JacobEdwardInc·
Why I am in favor of GLP1 medications. (SEMAGLUTIDE & TIRZEPATIDE) & Why I have started my own telemedicine practice providing these medications. 👇🏻
English
1
1
0
1.5K
Brian T. Kent
Brian T. Kent@btk·
@thedankoe @dvassallo Would the analogy be closer to being able to build a professional automatic cappuccino machine for the price of a single Starbucks grande?
English
0
0
0
28
DAN KOE
DAN KOE@thedankoe·
“Everyone is going to make their own apps with AI” My friend you don’t even make your own food. You’ll still pay a few bucks to use an app.
English
440
639
10.7K
462.3K
Packy McCormick
Packy McCormick@packyM·
My packed subway came out of the tunnel and onto the bridge to this and I was really expecting us to break into a round of applause (we didn’t).
English
31
3
223
15.5K
Brian T. Kent retweeted
Wu Tang is for the Children
Wu Tang is for the Children@WUTangKids·
Adam Sandler introduces Nirvana's surviving members with Post Malone to perform "Smells Like Teen Spirit" at SNL's 50th anniversary concert 🔥
English
1.8K
9.9K
90.6K
8.6M
Brian T. Kent
Brian T. Kent@btk·
@patio11 @paulg Perhaps a compromise is working on apps that call the generative APIs. So you have a foot in both.
English
0
0
0
118
Patrick McKenzie
Patrick McKenzie@patio11·
@paulg In some ways it is the old calculator problem on steroids. And I worry that this applies to a large subset of all things to teach. "You're going to go through an extended period of being bad at it. Everyone does... unless they use the magic answer box, which is really good."
English
13
7
250
15.6K
Paul Graham
Paul Graham@paulg·
I have the nagging feeling that there's going to be something very obvious about AI once it crosses a certain threshold that I could foresee now if I tried harder. Not that it's going to enslave us. I already worry about that. I mean something subtler.
English
1.4K
521
11.6K
2.1M
Brian T. Kent retweeted
Chubby♨️
Chubby♨️@kimmonismus·
AR breakthrough: superimposing 2D MRI results on real-life patients. This is just amazing.
English
125
723
5.7K
881.4K
Brian T. Kent
Brian T. Kent@btk·
@Trace_Cohen @bryanlanders And the use of "search", which is a new and distinct layer on top of transformers: decision-making algorithms that are more deterministic than the purely statistical, brute-force, self-attention-based layer. Search is what has powered very specific AI like poker bots.
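The "search as a layer on top of the model" idea can be illustrated with its simplest form, best-of-n: sample several candidate outputs, then let a deterministic evaluator pick the winner. `sample` and `evaluate` below are hypothetical stand-ins, not any real model API.

```python
# Best-of-n search: the stochastic generator proposes, the deterministic
# evaluator disposes. A minimal stand-in for search layered over a model.

def search_over_samples(sample, evaluate, n=8):
    """Draw n candidates from a stochastic generator and return the one the
    deterministic evaluator scores highest."""
    candidates = [sample() for _ in range(n)]
    return max(candidates, key=evaluate)

# Toy usage: three sampled "answers", scored by closeness to the target 5.
vals = iter([3, 7, 5])
best = search_over_samples(lambda: next(vals), lambda x: -abs(x - 5), n=3)
```

Game-playing systems like poker bots push the same idea much further (searching over future moves rather than just finished answers), but the division of labor, stochastic proposal plus deterministic evaluation, is the same.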
English
0
0
1
102
Brian T. Kent
Brian T. Kent@btk·
@Trace_Cohen @bryanlanders No. Chain of Thought (thinking about thoughts) was introduced in o1 and the latest Gemini, and in some ways people were already doing this with prompt engineering, function calling, structured outputs, and recursive calls... but o3 appears to be this on steroids.
English
1
0
1
76
Trace Cohen
Trace Cohen@Trace_Cohen·
So o3 is amazing apparently but what fundamentally changed to get their breakthrough? I thought we hit a wall!? 🍓
English
5
0
8
2.2K
Brian T. Kent đã retweet
Rohan Paul
Rohan Paul@rohanpaul_ai·
Nice collection of LLM papers, blogs, and projects, focusing on OpenAI o1 and reasoning techniques. What it offers:
📌 Curates papers, blogs, talks, and Twitter discussions about OpenAI's o1 and LLM reasoning
📌 Tracks frontier developments in LLM reasoning capabilities and techniques
English
3
100
458
30.2K
Brian T. Kent
Brian T. Kent@btk·
"The world will ask you who you are, and if you do not know, the world will tell you" - attributed to Carl Jung.  Having and working hard towards a goal breeds Agency and confidence.
Brian T. Kent@btk

@SMB_Attorney He was asking...kids may naturally have a scarcity mindset which can cause anxiety. I don't believe in "you can be anything you want" but rather "there are successful people in every field, become an expert in something you like. Always have a Plan A and you will have a job"

English
1
0
3
176