Raphaël Gazzotti

255 posts

@GazzottiRaphael

Data Expert, PhD in Computer Science | Data Integration & Management, Semantic Web, ML and NLP

Nice · Joined June 2019
46 Following · 41 Followers
Raphaël Gazzotti reposted
Sukh Sroay @sukh_saroy
🚨Breaking: Someone just open sourced a knowledge graph engine for your codebase and it's terrifying how good it is.

It's called GitNexus. And it's not a documentation tool. It's a full code intelligence layer that maps every dependency, call chain, and execution flow in your repo, then plugs directly into Claude Code, Cursor, and Windsurf via MCP.

Here's what this thing does autonomously:
→ Indexes your entire codebase into a graph with Tree-sitter AST parsing
→ Maps every function call, import, class inheritance, and interface
→ Groups related code into functional clusters with cohesion scores
→ Traces execution flows from entry points through full call chains
→ Runs blast radius analysis before you change a single line
→ Detects which processes break when you touch a specific function
→ Renames symbols across 5+ files in one coordinated operation
→ Generates a full codebase wiki from the knowledge graph automatically

Here's the wildest part: Your AI agent edits UserService.validate(). It doesn't know 47 functions depend on its return type. Breaking changes ship. GitNexus pre-computes the entire dependency structure at index time, so when Claude Code asks "what depends on this?", it gets a complete answer in 1 query instead of 10. Smaller models get full architectural clarity. Even GPT-4o-mini stops breaking call chains.

One command to set it up: `npx gitnexus analyze`

That's it. MCP registers automatically. Claude Code hooks install themselves. Your AI agent has been coding blind. This fixes that.

9.4K GitHub stars. 1.2K forks. Already trending. 100% Open Source. (Link in the comments)
[image]
125 replies · 527 reposts · 4.5K likes · 461.5K views
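The "blast radius" step the tweet describes is, at its core, a reverse-reachability query over a pre-computed call graph. A minimal sketch, assuming the graph has already been extracted as an edge list — the symbol names and data here are invented for illustration and are not GitNexus's actual API:

```python
from collections import defaultdict, deque

def blast_radius(call_edges, changed):
    """Return every symbol that transitively depends on `changed`."""
    # Invert the edges: a caller depends on each of its callees, so we
    # walk callee -> caller links breadth-first from the changed symbol.
    dependents = defaultdict(set)
    for caller, callee in call_edges:
        dependents[callee].add(caller)

    impacted, queue = set(), deque([changed])
    while queue:
        for caller in dependents[queue.popleft()]:
            if caller not in impacted:
                impacted.add(caller)
                queue.append(caller)
    return impacted

# Toy call graph (caller, callee) pairs.
edges = [
    ("checkout", "UserService.validate"),
    ("login", "UserService.validate"),
    ("api_handler", "checkout"),
    ("cron_job", "report"),  # unrelated subgraph, stays out of the radius
]
print(sorted(blast_radius(edges, "UserService.validate")))
# → ['api_handler', 'checkout', 'login']
```

Building `dependents` once at index time is what turns "what depends on this?" into a single lookup-and-walk instead of repeated source scans.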
Raphaël Gazzotti reposted
Google Labs @GoogleLabs
Today, we’re introducing Pomelli’s latest feature update, ‘Photoshoot’. With Photoshoot, you can start from a single image of your product and easily create high-quality, customized product shots to elevate your marketing. Available free of charge in the US, Canada, Australia & New Zealand! Get started with Pomelli today at labs.google/pomelli
1.2K replies · 4.7K reposts · 49.9K likes · 24.1M views
Raphaël Gazzotti reposted
Tristan @Tristan0x
This aired tonight to 1 billion people in China. A year ago these robots could barely wave a handkerchief, now they can do backflips and kung fu with nunchucks. Physical intelligence is the next frontier.
2.4K replies · 6K reposts · 35.3K likes · 6.6M views
Raphaël Gazzotti reposted
EBRAINS @EBRAINS_eu
The poster session at the #EBRAINSSummit2025 is happening now! Join us in the exhibition hall to discover posters about data curation, EBRAINS National Nodes, brain atlases, and more!
[image]
0 replies · 1 repost · 1 like · 209 views
Raphaël Gazzotti reposted
EBRAINS @EBRAINS_eu
🏆 At the #EBRAINSSummit2025 we celebrate the work of researchers across the EBRAINS network with the Best Abstract Awards! The 3 winners are:

🔬 For neuroscientific & medical resources:
• Layer-specific cell counts in BigBrain, Sebastian Bludau & Timo Dicksheid @fz_juelich

🤖 For Technology & AI:
• Bids2ebrains, Renqing Cuomao @EPFL_en
• The Virtual Aging Brain, Amirhossein Esmaeili @uniamu
[4 images]
0 replies · 1 repost · 2 likes · 289 views
Raphaël Gazzotti reposted
Jingna Zhang @zemotion
Cara grew from 40k to 650k users in a week because artists are fed up with Meta's AI policies. We're 700k now! - techcrunch.com/2024/06/06/a-s… 1/
[image]
147 replies · 2.2K reposts · 11.8K likes · 724.1K views
Raphaël Gazzotti reposted
OpenAI @OpenAI
Introducing Sora, our text-to-video model. Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions. openai.com/sora

Prompt: “Beautiful, snowy Tokyo city is bustling. The camera moves through the bustling city street, following several people enjoying the beautiful snowy weather and shopping at nearby stalls. Gorgeous sakura petals are flying through the wind along with snowflakes.”
8.9K replies · 29.7K reposts · 130K likes · 98.1M views
Raphaël Gazzotti reposted
near @nearcyan
Google has already started forcing Web Integrity into Chromium despite it being a 'proposal'. With WEI, users can be denied access for using non-approved browsers or hardware. The open Internet is officially dead as soon as this is commonly implemented.
[image]
186 replies · 2K reposts · 8.4K likes · 1.8M views
Raphaël Gazzotti reposted
Itamar Golan 🤓 @ItakGol
I can't believe I've just fine-tuned a 33B-parameter LLM on Google Colab in a few hours. 😱

Insane announcement for any of you using open-source LLMs on normal GPUs! 🤯 A new paper has been released, QLoRA, which is nothing short of game-changing for the ability to train and fine-tune LLMs on consumer GPUs.

In a few words: QLoRA reduces the memory usage of LLM fine-tuning without any performance tradeoffs compared to standard 16-bit model fine-tuning. This method enables 33B model fine-tuning on a single 24GB GPU and 65B model fine-tuning on a single 48GB GPU. This is incredible! 😍

More specifically, QLoRA uses 4-bit quantization to compress a pre-trained language model. The LM parameters are then frozen, and a relatively small number of trainable parameters are added to the model in the form of Low-Rank Adapters. During fine-tuning, QLoRA backpropagates gradients through the frozen 4-bit quantized pretrained language model into the Low-Rank Adapters. The LoRA layers are the only parameters being updated during training. Read more about LoRA in the original LoRA paper (arxiv.org/abs/2106.09685). 🤓

QLoRA has one storage data type (usually 4-bit NormalFloat) for the base model weights and a computation data type (16-bit BrainFloat) used to perform computations. QLoRA dequantizes weights from the storage data type to the computation data type to perform the forward and backward passes, but only computes weight gradients for the LoRA parameters, which use 16-bit bfloat. The weights are decompressed only when they are needed, therefore memory usage stays low during training and inference. Beautiful! 😱

QLoRA tuning is shown to match 16-bit fine-tuning methods in a wide range of experiments. In addition, the Guanaco models, which use QLoRA fine-tuning for LLaMA models on the OpenAssistant dataset (OASST1), are state-of-the-art chatbot systems and are close to ChatGPT on the Vicuna benchmark. This is an additional demonstration of the power of QLoRA tuning. The Guanaco models reach 99.3% of the performance level of ChatGPT while only requiring 24 hours of fine-tuning on a single GPU. You can actually do it in Google Colab.

📚 Links:
- QLoRA paper: arxiv.org/pdf/2305.14314…
- Colab for inference: colab.research.google.com/drive/1ge2F1QS…
- Colab for fine-tuning: colab.research.google.com/drive/1VoYNfYD…
- GitHub repository: github.com/artidoro/qlora
- Use it with HuggingFace: huggingface.co/blog/4bit-tran…
[2 images]
105 replies · 934 reposts · 4.8K likes · 1.8M views
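The adapter mechanics described in the thread reduce to a small amount of linear algebra. A minimal NumPy sketch of the LoRA part — the shapes, zero-initialization of B, and the alpha/r scaling follow the LoRA paper, while the 4-bit NF4 storage/dequantization step is only noted in a comment, since a faithful version needs bitsandbytes kernels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight. In QLoRA this matrix would be stored as 4-bit
# NormalFloat and dequantized to bf16 for each forward/backward pass.
d_out, d_in, r = 8, 8, 2
W = rng.standard_normal((d_out, d_in)).astype(np.float32)

# Trainable low-rank adapter: only A and B receive gradients.
alpha = 16
A = rng.standard_normal((r, d_in)).astype(np.float32) * 0.01
B = np.zeros((d_out, r), dtype=np.float32)  # zero init: adapter starts as a no-op

def forward(x):
    # Effective weight is W + (alpha / r) * B @ A, but it is never
    # materialized; the adapter's output is added to the frozen path.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in).astype(np.float32)
assert np.allclose(forward(x), W @ x)  # zero-init adapter changes nothing

# Trainable parameters: r*(d_in + d_out) for the adapter vs d_in*d_out
# for full fine-tuning of this layer.
print(r * (d_in + d_out), "adapter params vs", d_in * d_out, "full params")
```

At realistic hidden sizes (e.g. d = 4096, r = 16) the adapter holds well under 1% of the layer's parameters, which is why the optimizer state fits alongside a frozen 4-bit base model.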
Raphaël Gazzotti reposted
Santiago @svpino
We integrated ChatGPT with our robots. We had a ton of fun building this! Read on for the details:
291 replies · 1.3K reposts · 5.5K likes · 1.3M views
Raphaël Gazzotti reposted
Santiago @svpino
She is not real. Generative AI is mind-blowing, but there's a catch: anything serious will cost you an arm and a leg. But there's a solution that will save your wallet:
[image]
28 replies · 119 reposts · 1.1K likes · 438K views
Raphaël Gazzotti reposted
Dara Bahri @dara_bahri
Please RT We're hiring a student researcher here at Google Research. If you are a PhD student interested in language modeling and have a strong background in stats, feel free to reach out via email. careers.google.com/jobs/results/1…
2 replies · 42 reposts · 117 likes · 22K views
Raphaël Gazzotti reposted
Alex Xu @alexxubyte
/1 How Discord Stores Trillions of Messages

The diagram below shows the evolution of message storage at Discord: MongoDB ➡️ Cassandra ➡️ ScyllaDB
[image]
50 replies · 519 reposts · 3.4K likes · 482K views
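A key piece of the Cassandra/ScyllaDB design behind that evolution is partitioning messages by (channel_id, time bucket), with the bucket derived from the message's snowflake ID so that no single partition grows without bound. A rough sketch, assuming Discord's public snowflake epoch (2015-01-01 UTC) and an illustrative ~10-day bucket width — the helper names here are invented:

```python
DISCORD_EPOCH_MS = 1420070400000      # 2015-01-01T00:00:00Z in Unix ms
BUCKET_MS = 10 * 24 * 60 * 60 * 1000  # ~10-day time window per bucket

def snowflake_timestamp_ms(snowflake: int) -> int:
    # The top 42 bits of a snowflake are milliseconds since the epoch.
    return (snowflake >> 22) + DISCORD_EPOCH_MS

def partition_key(channel_id: int, message_id: int) -> tuple:
    # Messages land in the partition (channel_id, bucket), so even a very
    # hot channel is spread across many bounded partitions over time.
    elapsed = snowflake_timestamp_ms(message_id) - DISCORD_EPOCH_MS
    return (channel_id, elapsed // BUCKET_MS)

# A message sent at the epoch lands in bucket 0 of its channel...
print(partition_key(42, 0))
# ...while one sent 11 days later lands in the next bucket.
eleven_days_ms = 11 * 24 * 60 * 60 * 1000
print(partition_key(42, eleven_days_ms << 22))
```

Reading a channel's recent history then becomes a scan of the latest one or two partitions instead of a scatter across the cluster, which is what makes this layout work at trillions of rows.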
Raphaël Gazzotti reposted
Sebastian S. Cocioba🪄🌷 @ATinyGreenCell
A search engine that scrapes Sci-Hub, Library Genesis, and other shadow libraries. Hoist the colors! 🏴‍☠️📚 annas-archive.org
6 replies · 50 reposts · 309 likes · 40.5K views
Raphaël Gazzotti reposted
Yann LeCun @ylecun
Before we reach Human-Level AI (HLAI), we will have to reach Cat-Level & Dog-Level AI. We are nowhere near that. We are still missing something big, LLMs' linguistic abilities notwithstanding. A house cat has way more common sense and understanding of the world than any LLM.
389 replies · 573 reposts · 3.9K likes · 863.1K views
Raphaël Gazzotti @GazzottiRaphael
@SW_Journal Is there any news about this maintenance? It's been several days, and error messages are also popping up.
[image]
0 replies · 0 reposts · 0 likes · 0 views