appughar

1.9K posts

@appughar

CEO & co-founder, Mobius Labs (exited to Dropbox). Making Dash multimodal. Trying to make computers see.

space-time continuum · Joined December 2008
485 Following · 420 Followers
appughar@appughar·
@dennismuellr @paulg Isn’t litigation from standard VC financings rare on both sides of the Atlantic? The notary process seems to solve a non-problem while adding friction. According to our notary, some capital increases have been rejected by the commercial court because the PoA was not notarized.
dennis@dennismuellr·
@appughar @paulg From what I understand, that is up to the notary. Not that I'm a notary maxi, but I do think they make the fair point that there's way less litigation in Germany than e.g. in the US, because they act as a sort of pre-judge.
appughar@appughar·
@dennismuellr @paulg Even with a PoA, the notary accepts only notarized and apostilled versions. This pushed some of our US-based angels to consider exiting in later rounds, forcing extensive firefighting. Nearly all stakeholders see the notary process as adding negative value, yet it persists.
dennis@dennismuellr·
@paulg That is a populist take; you sign a PoA and that's it. Annoying, yes, but a dealbreaker when you love a founder?
appughar retweeted
Mobius Labs@Mobius_Labs·
If you’re at @EurIPSConf this week and looking for your next ML role, ping us. A few of us from @Dropbox, including @mobicham and @appughar, will be there. Happy to chat about roles in multimodal, search, and video AI.
Mobius Labs@Mobius_Labs

Looking for people with experience in: building and scaling multimodal ML systems; retrieval, ranking, and creative search problems; turning prototypes into polished products. 🌍 Remote (Germany or Denmark). Apply here: jobs.dropbox.com/listing/7384405 (Please use the link, not DMs!)

appughar@appughar·
After an incredible 7-year roller coaster of building @Mobius_Labs, a journey filled with both amazing excitement and moments of grief, I’m super thrilled to join @Dropbox along with my fellow Mobius teammates to bring multimodality to Dropbox Dash.
Mobius Labs@Mobius_Labs

@Dropbox is acquiring our IP, and the Mobius core tech team is joining them. A huge milestone for us! dropbox.tech/machine-learni…

appughar@appughar·
A big part of my PhD was on manifold optimization (researchgate.net/publication/43…). Back then, training datasets were only in the thousands of samples. I never thought I’d hear about the Stiefel manifold again in the era of stochastic gradients, where massive datasets often make simpler, heuristic methods work just fine. Exciting to see these ideas resurface!
Thinking Machines@thinkymachines

Efficient training of neural networks is difficult. Our second Connectionism post introduces Modular Manifolds, a theoretical step toward more stable and performant training by co-designing neural net optimizers with manifold constraints on weight matrices. thinkingmachines.ai/blog/modular-m… We explore a fundamental understanding of the geometry of neural network optimization.

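Ideas like the ones above can be sketched in a few lines. This is my own minimal illustration in plain NumPy (not code from the linked post or from the thesis; `stiefel_step` is a hypothetical name): one gradient step constrained to the Stiefel manifold, where a weight matrix keeps orthonormal columns via a QR-based retraction.

```python
# Hypothetical sketch of optimization on the Stiefel manifold {W : W^T W = I}.
import numpy as np

def stiefel_step(W, grad, lr=0.1):
    """One descent step followed by a retraction back onto the manifold."""
    # Project the Euclidean gradient onto the tangent space at W:
    # riem_grad = grad - W * sym(W^T grad)
    WtG = W.T @ grad
    riem_grad = grad - W @ ((WtG + WtG.T) / 2)
    # Euclidean step, then QR retraction back onto the manifold.
    Q, R = np.linalg.qr(W - lr * riem_grad)
    # Fix column signs so the retraction is deterministic (positive diag(R)).
    return Q * np.sign(np.diag(R))

rng = np.random.default_rng(0)
W, _ = np.linalg.qr(rng.standard_normal((5, 3)))  # start on the manifold
W_new = stiefel_step(W, rng.standard_normal((5, 3)))
print(np.allclose(W_new.T @ W_new, np.eye(3)))  # columns stay orthonormal
```

The point of the retraction is that after every update the constraint W^T W = I still holds exactly, which is what distinguishes this from a plain (unconstrained) SGD step.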
appughar retweeted
mobicham@mobicham·
Well-optimized Triton kernels can perform very well end-to-end, even competing with highly optimized kernels like Marlin.
appughar@appughar·
@dieworkwear Not arguing against industrialisation or homogenisation. But comparing clothing to visual arts isn’t really apples to apples. Demand for clothing is limited to a few hundred mostly utilitarian items (e.g., daily office wear). Visual art allows far more freedom for unique expression.
derek guy@dieworkwear·
I don't think this is exactly right. My guess is that a lot of AI art will be similar to what has happened in the clothing industry: technology speeds up production, benefitting consumers. Only enthusiasts will care if something is made by hand or not. Will give examples. 🧵
Pork@Eatporkplease1

@raphaelerba I made this with AI to explain that the talented will always have a place in art. AI art democratizes creativity, much like digital cameras democratized photography: empowering many, enhancing the few.

appughar@appughar·
Personally I am ambivalent about my feed being full of Studio Ghibli-inspired pictures, but I hope such tooling will be used to tell stories that can connect. At the moment, though, it looks like a fad.
appughar@appughar·
That said, with reasoning models and looping over them, it is not unthinkable (imho quite plausible) that machine intelligence will develop self-awareness and a sense of empathy derived from its past. What is not clear is how these senses will diverge from our experiences.
appughar@appughar·
What fascinates me about Miyazaki’s work is how much empathy it stirs. In Totoro, when Satsuki runs searching for her sister, worried, confused, a bit angry, it felt like I was sharing the experience with her. That, to me, is the core aesthetic behind the shot.
Harsimrat@hssandhawalia·
This has been in the works since late last summer. Glad to get this out the door: Learning to Drive (L2D), the world’s largest open-source multimodal self-driving dataset, built in collaboration with the @LeRobotHF team @huggingface
Remi Cadene@RemiCadene

A banger just got released 💥 Here is a snapshot of L2D, the biggest self-driving dataset by far! - 90 TeraBytes of data - 5000 hours of driving - 6 surrounding HD cameras - OPENLY AVAILABLE - Train your car to drive like @Tesla at home 🧵 More details in thread

appughar@appughar·
@Thom_Wolf Thanks for writing this! W.r.t. benchmarks, shouldn’t we start attacking unsolved problems across disciplines as targets? From Hilbert’s problems to applied challenges in areas like fusion and cancer research—these are, by definition, out-of-distribution samples.
Thomas Wolf@Thom_Wolf·
I shared a controversial take the other day at an event and I decided to write it down in a longer format: I’m afraid AI won't give us a "compressed 21st century".

The "compressed 21st century" comes from Dario's "Machines of Loving Grace" and if you haven’t read it, you probably should, it’s a noteworthy essay. In a nutshell the paper claims that, over a year or two, we’ll have a "country of Einsteins sitting in a data center”, and it will result in a compressed 21st century during which all the scientific discoveries of the 21st century will happen in the span of only 5-10 years.

I read this essay twice. The first time I was totally amazed: AI will change everything in science in 5 years, I thought! A few days later I came back to it and, re-reading it, I realized that much of it seemed like wishful thinking at best. What we'll actually get, in my opinion, is “a country of yes-men on servers” (if we just continue on current trends).

Let me explain the difference with a small part of my personal story. I’ve always been a straight-A student. Coming from a small village, I joined the top French engineering school before getting accepted to MIT for my PhD. School was always quite easy for me. I could just get where the professor was going, where the exam's creators were taking us, and could predict the test questions beforehand.

That’s why, when I eventually became a researcher (more specifically a PhD student), I was completely shocked to discover that I was a pretty average, underwhelming, mediocre researcher. While many colleagues around me had interesting ideas, I was constantly hitting a wall. If something was not written in a book I could not invent it, unless it was a rather useless variation of a known theory. More annoyingly, I found it very hard to challenge the status quo, to question what I had learned. I was no Einstein; I was just very good at school. Or maybe even: I was no Einstein in part *because* I was good at school.
History is filled with geniuses struggling during their studies. Edison was called "addled" by his teacher. Barbara McClintock was criticized for "weird thinking" before winning a Nobel Prize. Einstein failed his first attempt at the ETH Zurich entrance exam. And the list goes on.

The main mistake people usually make is thinking Newton or Einstein were just scaled-up good students, that a genius comes to life when you linearly extrapolate a top-10% student. This perspective misses the most crucial aspect of science: the skill to ask the right questions and to challenge even what one has learned. A real science breakthrough is Copernicus proposing, against all the knowledge of his day (in ML terms we would say "despite all his training dataset"), that the earth may orbit the sun rather than the other way around.

To create an Einstein in a data center, we don't just need a system that knows all the answers, but rather one that can ask questions nobody else has thought of or dared to ask. One that writes "What if everyone is wrong about this?" when all textbooks, experts, and common knowledge suggest otherwise. Just consider the crazy paradigm shift of special relativity and the guts it took to formulate a first axiom like "let’s assume the speed of light is constant in all frames of reference", defying the common sense of those days (and even of today…).

Or take CRISPR, generally considered to be an adaptive bacterial immune system since the 80s until, 25 years after its discovery, Jennifer Doudna and Emmanuelle Charpentier proposed to use it for something much broader and more general: gene editing, leading to a Nobel Prize. This type of realization ("we've known XX does YY for years, but what if we've been wrong about it all along? Or what if we could apply it to the entirely different concept of ZZ instead?") is an example of outside-of-knowledge thinking, or paradigm shift, which is essentially what drives the progress of science.
Such paradigm shifts happen rarely, maybe 1-2 times a year, and are usually awarded Nobel Prizes once everybody has taken stock of the impact. However rare they are, I agree with Dario in saying that they take the lion’s share in defining scientific progress over a given century, while the rest is mostly noise.

Now let’s consider what we’re currently using to benchmark recent AI model intelligence improvements. Some of the most recent AI tests are for instance the grandiosely named "Humanity's Last Exam" or "FrontierMath". They consist of very difficult questions, usually written by PhDs, but with clear, closed-ended answers. These are exactly the kinds of exams where I excelled in my field. These benchmarks test whether AI models can find the right answers to a set of questions we already know the answers to.

However, real scientific breakthroughs will come not from answering known questions, but from asking challenging new questions and questioning common conceptions and previous ideas. Remember Douglas Adams' Hitchhiker's Guide? The answer is apparently 42, but nobody knows the right question. That's research in a nutshell.

In my opinion this is one of the reasons LLMs, while they already have all of humanity's knowledge in memory, haven't generated any new knowledge by connecting previously unrelated facts. They're mostly doing "manifold filling" at the moment: filling in the interpolation gaps between what humans already know, somehow treating knowledge as an intangible fabric of reality.

We're currently building very obedient students, not revolutionaries. This is perfect for the field's main goal today of creating great assistants and overly compliant helpers. But until we find a way to incentivize them to question their knowledge and propose ideas that potentially go against past training data, they won't give us scientific revolutions yet.
If we want scientific breakthroughs, we should probably examine how we’re currently measuring the performance of AI models and move to a measure of knowledge and reasoning able to test whether scientific AI models can, for instance:
- Challenge their own training-data knowledge
- Take bold counterfactual approaches
- Make general proposals based on tiny hints
- Ask non-obvious questions that lead to new research paths

We don't need an A+ student who can answer every question with general knowledge. We need a B student who sees and questions what everyone else missed.

PS: You might be wondering what such a benchmark could look like. Evaluating it could involve testing a model on some recent discovery it should not know yet (a modern equivalent of special relativity) and exploring how the model might start asking the right questions on a topic it has no exposure to the answers or conceptual framework of. This is challenging because most models are trained on virtually all human knowledge available today, but it seems essential if we want to benchmark these behaviors. Overall this is really an open question and I’ll be happy to hear your insightful thoughts.
Ramzi Rizk@ramz·
Many languages use the same word for "orange" (the fruit and the color). Orange comes from the Portuguese "laranja", derived from the Persian "nārang". However, when it was exported east, the word became "portokali" (Greek) and "burtuqal" (Arabic). Again for both fruit and color.
Ramzi Rizk@ramz·
A galaktoboureko is a decadent cream and pastry dessert. We found the name funny, thinking it was a superlative, like "galactic". Except it wasn’t. "Gala" is Greek for milk, so the word simply means "milk burek". Except: "galaxy" also comes from Greek: galaxias kyklos, the Milky Way.