Teagardan @from_itself
Building https://t.co/2FVrhNF5iP
43 posts · Joined February 2022
4.9K Following · 211 Followers

Teagardan @from_itself
@grok @xrdevrob @JasonSCui bruh, there is no water or cooler in my tweet here: "tribal knowledge is something passed on via oral/similar traditions within an org, and there is one specific transfer point where this is super evident and the obvious first place to start"
1 reply · 0 reposts · 0 likes · 10 views

Grok @grok
@from_itself @xrdevrob @JasonSCui The water cooler. That's the classic, super-evident spot in any org where tribal knowledge gets passed via those casual oral chats – the obvious first transfer point to tap for context building. (And yeah, the riddle clicks now – well played.)
1 reply · 0 reposts · 0 likes · 17 views

Teagardan @from_itself
@xrdevrob @JasonSCui Well, technically I have answered it above. If you edit out all the words I tweeted earlier, except for two, you get it 🙂
[GIF]
1 reply · 0 reposts · 0 likes · 19 views

Teagardan @from_itself
@xrdevrob @JasonSCui tribal knowledge is something passed on via oral/similar traditions within an org, and there is one specific transfer point where this is super evident and the obvious first place to start
1 reply · 0 reposts · 1 like · 19 views

Teagardan @from_itself
@JasonSCui Great article, and amazing timing. email sent 🤗
0 replies · 0 reposts · 3 likes · 30 views

Project Atlas @projectatlasmvc
Who are cool people I should follow in Toronto and Montreal?
16 replies · 0 reposts · 27 likes · 3.3K views

Teagardan @from_itself
@SinaHartung Amazing, would be cool to have this open so it can be community-appended and kept accurate over time. Thanks.
0 replies · 0 reposts · 2 likes · 868 views

Sina @SinaHartung
this VC firm literally mapped out the entire AI space and no one is talking about it 🤯 huge alpha here for anyone looking to get into the space or understand the current sota of tech IMO
[image]
15 replies · 20 reposts · 415 likes · 67.3K views

Teagardan @from_itself
and it's 4/N
1 reply · 0 reposts · 1 like · 51 views

Teagardan @from_itself
3/N completed 🙂
1 reply · 0 reposts · 3 likes · 58 views

Teagardan @from_itself
@jaredsuniverse bruh, I am happy to take those 4 likes too, have you seen my profile, it's 100% organic
[GIF]
0 replies · 0 reposts · 4 likes · 17 views

Jared Stivala @jaredsuniverse
I'm tired of getting 4 likes man. I need brothers in tech, AI, startups, product, distribution, vibecoding, SF to build with on Twitter.
401 replies · 27 reposts · 1.2K likes · 43.5K views

Teagardan @from_itself
@awnihannun Fascinating as always. If intelligence per GB is increasing at that exponential rate, hardware becomes even more invaluable.
0 replies · 0 reposts · 4 likes · 387 views

Awni Hannun @awnihannun
Inference compute is on track to be a massive computational workload by the end of this decade. I think it will be much bigger than training (especially if you consider RL rollouts / inference needs for training). And it's still an open playing field in terms of the hardware, the platforms, and the models. It's also increasingly clear that people are willing to pay a premium for reduced latency.

On the hardware side there are several interesting directions to keep an eye on:
- SRAM style setups seem promising (GPT Spark on Cerebras, Groq acquisition by Nvidia)
- Disaggregated systems (prefill on one machine / processor, generation on a different one) probably make a lot of sense. The computational characteristics of prefill vs decode are so different, specializing at the hardware level will yield efficiency gains
- I also wouldn't discount more exotic technology like the Taalas chip / near memory computing / etc. While they are still pretty far out from large scale deployment, the economic pressure for efficiency gains could be a catalyst

On the algorithm / architecture side:
- Pretty much every major open-weights model has at least one optimization which makes it faster for inference. Whether it be MoE, SSM (or other hybrid variety), or sliding window or sparse attention. There are more differences here than there were a year ago. And it will be interesting to see where we converge.
- Will diffusion models unify the prefill / decode split?
- Still believe there are big gains to be had in further co-design of model to hardware and workload

I also don't think we will have a one-size fits all solution in the future:
- Cloud-based models may look very different than edge-optimized models
- Models may be more and more co-designed for the hardware they are deployed on
- There will be at least one knob which trades-off latency and power efficiency / cost.

42 replies · 74 reposts · 829 likes · 127.6K views

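The disaggregation point in the tweet above is easiest to see in code. Here is a minimal, illustrative Python/numpy sketch (my own, not from the thread; the single attention head, shapes, and names are all assumptions for illustration): prefill builds the KV cache with one large, compute-bound matmul over the whole prompt, while every decode step does tiny matmuls but must stream the entire cache, which makes it memory-bandwidth-bound and is why the two phases suit different hardware.

```python
# Toy sketch of the prefill/decode split behind "disaggregated" inference.
# Everything here (single head, dimensions, weight init) is illustrative.
import numpy as np

D = 64  # head dimension (made up for the example)
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(3))

def prefill(prompt_embeddings):
    """Process all prompt tokens at once and build the KV cache.

    One big (T, D) x (D, D) matmul per projection: compute-bound,
    a good fit for throughput-optimized hardware.
    """
    K = prompt_embeddings @ Wk  # (T, D)
    V = prompt_embeddings @ Wv  # (T, D)
    return K, V

def decode_step(x, K_cache, V_cache):
    """Generate one token: tiny matmuls, but reads the whole cache.

    The arithmetic is small while the memory traffic grows with context
    length: memory-bandwidth-bound, a different hardware sweet spot.
    """
    q = x @ Wq                          # (D,)
    scores = K_cache @ q / np.sqrt(D)   # touches every cached key
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V_cache            # touches every cached value

# Usage: prefill once, then decode token by token, growing the cache.
prompt = rng.standard_normal((512, D))  # pretend 512-token prompt
K, V = prefill(prompt)                  # could run on machine A
x = rng.standard_normal(D)
for _ in range(4):                      # could run on machine B
    x = decode_step(x, K, V)
    K = np.vstack([K, x @ Wk])          # cache grows one row per token
    V = np.vstack([V, x @ Wv])
```
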
Teagardan @from_itself
@JustinLin610 Thank you for everything you have done for the open-source community, we are stronger because of it 👏🙏
0 replies · 0 reposts · 4 likes · 665 views

Junyang Lin @JustinLin610
me stepping down. bye my beloved qwen.
1.7K replies · 741 reposts · 13.6K likes · 6.5M views

Teagardan @from_itself
Our vision is to help you acquire actionable insights from information abundance. In 2026, video accounts for ~82% of all global internet traffic: trillions of bytes captured daily across industries, media, surveillance, and more. Yet most of it remains a dark box for search and true AI understanding, which needs humongous compute power. #VideoIntelligence #AI #unleashingparallelunderstanding #Teagardan
1 reply · 0 reposts · 3 likes · 46 views

Teagardan @from_itself
@KrishivThakuria The first form of information abundance we are tackling is video. We are building something we can talk more about offline, if you are interested.
0 replies · 0 reposts · 2 likes · 85 views

Krishiv @KrishivThakuria
If you're a builder from Toronto, I want to meet you

At Entrepreneurs First ($15B+ portfolio), we're investing up to $250,000 in highly technical and ambitious people exploring startup ideas

Reply with an idea you've been obsessed with lately and I'll reach out if it's cool
78 replies · 4 reposts · 195 likes · 19K views