Marco Faella

737 posts

@m_faella

Computer Science professor at U. of Naples (Italy). Author of 'Seriously Good Software'. #java #gamedev

Napoli · Joined March 2014
420 Following · 448 Followers
Marco Faella retweeted
signüll
signüll@signulll·
the craziest part now is that the modern computer probably has to be entirely reinvented, from scratch. pretty much like how jobs & co brought apple ii to market. like not improved. not given a chatbot sidebar or something but really from the ground up, like the iphone redefined what it meant to be a pocket computer.

the current paradigm for computers was built around a human staring at a screen, moving a cursor, opening apps, managing windows, naming files, remembering where things live, & manually translating intent into interface actions. that made sense when the human was the runtime. but in an ai native world, it starts to look kinda ridiculous. you can see this ridiculousness when you use computer use agents… they are useful, sure, but they're also obviously transitional. they're teaching ai to operate machines designed for humans, which is clever, but also kind of absurd. it's like making a robot hand so it can use a doorknob instead of asking why the door needs a knob at all. yes, i know humans also need to use a doorknob, but maybe in the future humans don't need to use a computer, or at least what we think of as a computer today, at all.

this all leads to some interesting questions:
- what is a file when the system understands context?
- what is an app when intent can route itself?
- what is a desktop when work can be decomposed, executed, monitored, & summarized by agents?
- what is a browser when the agent can retrieve, compare, transact, & remember?
- what is an operating system when the primary user is no longer just a person, but a person plus a swarm of delegated intelligences? or no person at all.

the old computer assumed navigation. the new computer has to assume a new kind of intention. the old computer organized information. the new computer has to try to organize agency. we're still in the hacky middle stage at the moment, with sidebars, copilots, agents clicking through legacy ui, & automation layers sitting on top of 40 year old metaphors. the new computer is likely one where memory, context, identity, permissions, tools, agents, & interfaces are native primitives. this means desktop, mobile, browser, apps, files, & folders deserve another first principles look.
372 replies · 674 reposts · 6.6K likes · 566.8K views
Marco Faella
Marco Faella@m_faella·
We should train an LLM on pre-1905 texts and see if it discovers relativity.
0 replies · 0 reposts · 0 likes · 57 views
Marco Faella retweeted
Paata Ivanisvili
Paata Ivanisvili@PI010101·
AI is clearly assisting in solving some math problems and accelerating parts of mathematical research, and experiencing this from the inside is emotionally nontrivial.

- in the last ~25 days I essentially ended up with 3 research projects. The ideas are there; the real bottleneck is sitting down and writing clean, arXiv-ready exposition. I'm increasingly tempted to use AI for drafting text, but checking and verifying every line still takes serious time. This temptation keeps growing, and at times the whole loop of testing / correcting / verifying honestly makes me feel a bit nauseous (too much information to digest 🙂)
- I also notice that PhD students who actively use AI tend to move noticeably faster than those who are pessimistic or dismissive of the technology. This is not an advertisement for paying hundreds of dollars per month for frontier models, but it is a reminder to stay open-minded, curious, and willing to try new tools rather than reject them a priori.
- the math community (and journals) seems to be split into three camps: (A) those who are against, (B) those who are watching, and (C) those who genuinely like and use these tools. The pressure from camp (A) is real. For example, after my tweet about a new Bellman function went viral, within a couple of days a related paper of mine (which had been under review for months) was rejected. Within a few more days, a fellowship application was also rejected. My reaction? I don't care.
- there is also a widespread misconception: if AI helped solve your problem (in whatever sense), then you must have been working on a trivial problem or doing "wrong" research. This is simply false. I should admit I once believed something similar myself, until I got punched in the face by reality after seeing AI make progress on genuine FrontierMath tier-4 problems. Whether AI helps depends on many variables: what is in the training data, how popular the topic is, whether the solution transfers from another field, and much more.
- importantly, my public examples where AI made progress do not mean AI can "do math." In my experience, AI fails on serious math attempts most of the time. Claims that AI will "solve math in the next 1-2 years" sound like nonsense to me. The easiest(!) hard problem I once challenged AI with, a public tier-4 problem about BMO spaces ( epoch.ai/frontiermath/t… ), is still completely untouched, even though the solution is available. My honest "wow" moment will be when AI starts making progress there. Until then, I'm calmly sitting, eating popcorn, and watching how long that problem keeps beating AI 🍿
- finally, testing math problems on AI is fascinating. Sometimes it feels like talking to a person who traveled back from the future and knows an enormous amount of information. Whatever open question or vague idea you have, try testing it. You can probe multiple problems in parallel, which is something fundamentally unusual for the human brain.
25 replies · 43 reposts · 272 likes · 25.5K views
Marco Faella
Marco Faella@m_faella·
Experimenting with AI-assisted paper reviewing. I write the review and then ask: "Evaluate the attached review for the attached paper and judge whether it is fair and constructive. Check its grammar and spelling, and whether it misses important issues in the paper. Target venue is..."
0 replies · 0 reposts · 1 like · 46 views
Tyler Bosmeny
Tyler Bosmeny@bosmeny·
@sethbannon I love it, except for the "fighting back" framing. Oral exams might be a better way to do exams full stop! Except totally impractical pre-AI.
5 replies · 3 reposts · 82 likes · 26.2K views
Seth Bannon
Seth Bannon@sethbannon·
This is brilliant. A professor noticed take home assignments coming back suspiciously good. Like McKinsey memos. So he started cold calling the students, asking why they made certain choices in their submissions. They couldn't explain even basic choices! Clear copy/paste from LLMs.

So he fought AI with AI: an oral final exam run by a voice agent and evaluated by a council of LLM graders.
> 36 students examined in 9 days
> ~25 min avg per exam
> $15 total all-in (≈ $0.42/student)
> Full transcripts, audit trail, and super actionable feedback

This works because you can paste into ChatGPT and copy the output, but you can't fake coherent, real-time reasoning about your project when someone keeps drilling. Interesting that the LLM grading committee actually converged after deliberation and exposed a teaching gap (A/B testing was the weak spot across the class).

Students using AI killed take home exams. Very clever to fight fire with fire and use AI to bring back oral exams. Perhaps not surprisingly, only 13% of students preferred the AI oral format 😂

Oral exams used to be the gold standard in education but were replaced by more scalable written exams. With AI, oral exams are scalable again. Will be interesting to see how this changes education.
72 replies · 264 reposts · 2.1K likes · 381.1K views
Marco Faella retweeted
Andrej Karpathy
Andrej Karpathy@karpathy·
I've never felt this much behind as a programmer. The profession is being dramatically refactored as the bits contributed by the programmer are increasingly sparse and in between. I have a sense that I could be 10X more powerful if I just properly string together what has become available over the last ~year, and a failure to claim the boost feels decidedly like skill issue. There's a new programmable layer of abstraction to master (in addition to the usual layers below) involving agents, subagents, their prompts, contexts, memory, modes, permissions, tools, plugins, skills, hooks, MCP, LSP, slash commands, workflows, IDE integrations, and a need to build an all-encompassing mental model for strengths and pitfalls of fundamentally stochastic, fallible, unintelligible and changing entities suddenly intermingled with what used to be good old fashioned engineering. Clearly some powerful alien tool was handed around except it comes with no manual and everyone has to figure out how to hold it and operate it, while the resulting magnitude 9 earthquake is rocking the profession. Roll up your sleeves to not fall behind.
2.6K replies · 7.5K reposts · 55.9K likes · 16.8M views
Marco Faella retweeted
roon
roon@tszzl·
the primary criticism of AI you hear has nothing to do with water use or existential risk whatsoever: most people just think it's fake and doesn't work and is a tremendous bubble eating intellectual property while emitting useless slop along the way. when GPT-5 came out and perhaps didn't live up to what people were expecting for a full version bump, the timeline reaction was not mild, it was a full-scale meltdown. there are many intelligent (and unintelligent) people who latched onto this moment to declare AI scaling over, thousands of viral tweets, still a prevailing view in many circles. The financial-cultural phenomenon of machine intelligence is one of the most powerful in decades, and there are a lot of people who would like for its position to be weakened, many outright celebrating its losses and setbacks. Michael Burry of 'Big Short' fame, unfortunately the type of guy to predict 12 of the last 3 recessions, has bet himself into insolvency on the AI bubble's collapse.

one of the stranger things about this time is that there are very few secrets, and very little reason to be so misinformed. model labs have very little space in between creating new capabilities and launching them to the public. The view among the well informed public, and not just "lab insiders", is that machine intelligence is absurdly, joyfully smart at so many new things every month. It's actively contributing on the cutting edge of programming and math and science. Sébastien Bubeck and co's recent paper reports that GPT-5 Pro is capable of producing results on the frontier of theoretical physics research; Terry Tao wrote a blog about "vibe-proving" Erdős problems with the auto-formalization AI Aristotle. You can read that these scientists are using it to actively contribute to black hole physics, tighten mathematical bounds in optimization theory, and churn morasses of biomedical data into real insight. Google DeepMind, from the way they are signalling, seems to be slowly closing a dragnet around the Navier-Stokes smoothness millennium problem (though of course, I don't know). Several companies stocked top to bottom with brilliant scientists are racing to build pipelines to solve novel physics and chemistry and biology.

You can read online about the new kinds of organizations being born around machine intelligence as a first class factor of production. For the first time, the new factor actually gives you ideas for improving the processes themselves. It's designing whole assembly lines where some of the workers on the assembly line are also AIs, and the line itself is morphing and self-optimizing. Tiny teams are producing amounts of work that seemed impossible to organizations of a few years ago. It's hard not to feel excited by the productivity growth happening in these admittedly narrow software sectors.

Every time I use codex to solve some issue late at night or GPT helps me figure out a difficult strategic problem I feel: what a relief. There are so few minds on Earth that are both intelligent and persistent enough about some intellectual pursuit to generate new insights and keep the torch of scientific civilization alive. Now you have potentially infinite minds to throw at infinite potential problems. Your computer friend that never takes the day off, never gets bored, never checks out and stops trying. You can feel the unburdening of Atlas, the takeoff. It feels more prosaic and less poetic than it did in 2023, even though the results speak for themselves more loudly.
345 replies · 406 reposts · 4.1K likes · 836.8K views
Marco Faella retweeted
Arpit Bhayani
Arpit Bhayani@arpit_bhayani·
SQLite has about 155,800 lines of code, and its test suite has roughly 92 million lines. That is ~590x more test code than actual code 🤯 This is the level of testing you need for a real production database. Here are some types of tests they run.

- Out-of-memory tests: SQLite cannot just crash when memory runs out. On embedded devices, OOM errors are common. They simulate malloc failures at every possible point and verify that the database handles them gracefully (see the sketch after this post).
- I/O error tests: Disks fail. Networks drop. Permissions change mid-operation. SQLite inserts a custom file system layer that can simulate failures after N operations, then verifies that no corruption occurs.
- Crash tests: What happens if power cuts out mid-write? They simulate crashes at random points during writes, corrupt the unsynchronized data to mimic real filesystem behavior, then verify the database either completed the transaction or rolled it back cleanly. No corruption allowed.
- Fuzz testing: They throw malformed SQL, corrupted database files, and random garbage at SQLite. The dbsqlfuzz tool runs about 500 million test mutations every day across 16 cores.
- 100% branch coverage: Every single branch instruction in SQLite's core is tested in both directions. Not just 'did this line run', but 'did this condition evaluate to both true AND false'.

Databases are really unforgiving :) By the way, if you want to go deeper, I recommend reading the official SQLite documentation on their testing strategy. The doc is pretty practical and deep. Have linked it below.
98 replies · 522 reposts · 6.1K likes · 527.8K views
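A minimal sketch of the malloc-failure-injection idea from the post above, assuming a hypothetical wrapper (test_malloc / set_alloc_failure are illustrative names, not SQLite's actual harness):

```c
#include <stdio.h>
#include <stdlib.h>

/* Fail the Nth allocation; -1 means never fail. */
static int alloc_countdown = -1;

static void set_alloc_failure(int n) { alloc_countdown = n; }

static void *test_malloc(size_t size) {
    if (alloc_countdown > 0 && --alloc_countdown == 0)
        return NULL;                /* simulated out-of-memory */
    return malloc(size);
}

/* Code under test: must survive NULL from any allocation. */
static char *make_buffer(size_t n) {
    char *p = test_malloc(n);
    if (p == NULL) return NULL;     /* graceful OOM path */
    p[0] = '\0';
    return p;
}

int main(void) {
    /* Re-run the operation while failing allocation 1, 2, 3, ...,
       verifying it reports failure cleanly instead of crashing. */
    for (int n = 1; n <= 3; n++) {
        set_alloc_failure(n);
        char *buf = make_buffer(64);
        printf("fail alloc %d -> %s\n", n, buf ? "ok" : "handled OOM");
        free(buf);
    }
    return 0;
}
```

SQLite's real suite applies this idea exhaustively, failing every allocation site in turn and checking that the library backs out cleanly each time.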
Marco Faella retweeted
Computer Science
Computer Science@CompSciFact·
The C code below compiles and prints "hello, world".
[attached image: the C code in question]
105 replies · 285 reposts · 2.3K likes · 700.6K views
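The attached code image isn't preserved in this scrape. One well-known trick in this spirit (a guess, not necessarily the code in the tweet) is a program containing a bare URL, which is still valid C because "https" parses as a goto label and everything after "//" is a comment:

```c
#include <stdio.h>

int main(void) {
    /* Legal C: "https" becomes an (unused) goto label,
       "//en.wikipedia.org/..." becomes a line comment. */
    https://en.wikipedia.org/wiki/C_(programming_language)
    printf("hello, world\n");
    return 0;
}
```

Compilers accept this, at most warning about the unused label.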
Marco Faella retweeted
Emmanuel Tsekleves
Emmanuel Tsekleves@PhDtoProf·
Last month a PhD student of a colleague of mine almost got expelled for using AI. A reviewer flagged her literature review for AI detection. Journal editor threatened rejection and report. She didn't know which AI tools were safe. Neither did he. So I tested 12 literature review AI tools.
17 replies · 168 reposts · 816 likes · 81.5K views
Marco Faella
Marco Faella@m_faella·
"The successful inventor asks where we can get from here, rather than how we can get there". From Why greatness cannot be planned. Food for thought... @kenneth0stanley
0 replies · 0 reposts · 1 like · 89 views
Marco Faella retweeted
Timothy Gowers @wtgowers
Timothy Gowers @wtgowers@wtgowers·
I crossed an interesting threshold yesterday, which I think many other mathematicians have been crossing recently as well. In the middle of trying to prove a result, I identified a statement that looked true and that would, if true, be useful to me. 1/3
62 replies · 302 reposts · 2.5K likes · 892.3K views
Marco Faella
Marco Faella@m_faella·
Having fun with Python and NaN:

```python
import numpy as np

x = np.full(10, np.nan)   # an array full of NaN
print(x[2] is np.nan)     # False! indexing returns a fresh NumPy scalar,
                          # not the np.nan float singleton
y = np.nan
print(y is np.nan)        # True!

print(np.isnan(x[2]))     # True (solution: test by value with np.isnan)
```
0 replies · 0 reposts · 0 likes · 46 views
Marco Faella retweeted
Alex Kuleshov
Alex Kuleshov@0xAX·
I have finished adjusting the 6th (and last) part about the Linux kernel booting process for modern kernels - github.com/0xAX/linux-ins…
0 replies · 49 reposts · 299 likes · 13.7K views
Marco Faella retweeted
Julian Schrittwieser
Julian Schrittwieser@Mononofu·
As a researcher at a frontier lab, I'm often surprised by how unaware public discussions are of current AI progress. I wrote a post to summarize studies of recent progress, and what we should expect in the next 1-2 years: julian.ac/blog/2025/09/2…
220 replies · 807 reposts · 5.9K likes · 2M views
Marco Faella retweeted
Asimov Press
Asimov Press@AsimovPress·
Is the cell a computer? Here's what Michael Elowitz ( @ElowitzLab ) had to say in a prior interview:

"The computer analogy is a double-edged sword. The analogy is useful in the sense that a cell is a programmable device that can do many different things. And the closest thing we have like that, in our daily life, is the computer. You can program your computer to do all kinds of things. And I think that's also true with the cell; you can program the cell to grow, divide, change morphology, interact with other cells, and do all kinds of things that are difficult for us to imagine. Cells are open-ended, programmable systems. We can even program cells to carry out functions that cells did not naturally evolve to do. So from that point of view, the computer analogy seems quite useful and accurate, actually.

But there's also a lot of ways in which a cell is not a computer. And I think those are equally important. Cells are noisy, and they use that noise to control behaviors at the population level. They also make copies of themselves and grow exponentially—computers do not do that. Instead of transistors connected by wires, cells use molecules connected by specific molecular interactions. Another one is negative numbers, right? You can't have a negative concentration of a molecule in biology, which means that biology has to solve problems in unique ways. There's also combinatoriality, in which biological systems encode signals in combinations of molecules that compete to form different complexes. These systems compute in ways that resemble digital computation in some ways and differ in others.

So the thing that makes me nervous about the metaphor is when we start to impose our electrical engineering expectations on the mysterious world inside of a cell. There are some principles of electrical engineering that apply to living cells, but the most interesting things about biology are all the ways in which they're different."

(From the Archives) "Synthetic Origins." press.asimov.com/articles/synth…
8 replies · 46 reposts · 302 likes · 24.9K views
Marco Faella retweeted
Percy Liang
Percy Liang@percyliang·
Wrapped up Stanford CS336 (Language Models from Scratch), taught with an amazing team @tatsu_hashimoto @marcelroed @neilbband @rckpudi. Researchers are becoming detached from the technical details of how LMs work. In CS336, we try to fix that by having students build everything:
46 replies · 573 reposts · 5K likes · 678.1K views
Marco Faella retweeted
daren
daren@darengb·
On the Origin of Species actually cited Empedocles as the first source for a theory of natural selection, in the 5th century BC, but Darwin was the first to prove it. Unfortunately, his texts are gone and we only have his theory as explained by Aristotle: "Empedocles says that many things in nature have arisen spontaneously, like the teeth: at first the front teeth grew sharp, suited for tearing, and the molars broad and useful for grinding food; and so with other parts. Whenever they happened to be arranged in a useful way, such things survived, being preserved by their utility; but those that were not so arranged perished and still perish, as Empedocles says." - Aristotle
7 replies · 30 reposts · 616 likes · 18.9K views