VeeVee ^^
875 posts


@not_jpsenak @gnukeith I thought there would be some documented changes for such a large project
English

I see this question a lot, so: Brave Stable (the red one) is what 99% of our users are on; it just works.
Brave Nightly (the purple one) is the one I work on. It has experimental features, a bunch of cool stuff we're working on that might eventually end up in Stable if the feedback is positive.
I don't recommend using Nightly as a daily driver, but you can, and if you do, you can tag me and let me know what you think of it ^^
ayesha@ayesha_fatiima
I've never used the purple Brave browser, what's it used for?

@def_meditext @luciascarlet I feel ya. I wanted to upgrade to 32 GB because of my requirements but wasn't able to.

@C_for_Crazy @luciascarlet Not everyone has enough cash to waste on expensive accessories or upgrades. Plenty of people still use 8 GB of RAM, including me.

@C_for_Crazy @luciascarlet You must've paid crazy cash at the time to be saying that. Let me guess, 16 GB of DDR4 RAM?

@luciascarlet @DegesVojta Microsoft Teams uses WebView, not Electron.

@DegesVojta I’ve (thankfully?) never had to use Microsoft Teams, so my condolences

@C_for_Crazy Dear consumer, we would like to assist you with your concern.
Please click here:
twitter.com/messages/compo…

Using GNOME as the default desktop environment for your distro is just asking to filter out new users.
It is an objective downgrade in usability for people coming from Windows or Mac.
KDE is the most feature-complete desktop environment. 99% of users will have everything they need to feel comfortable.

@mweinbach Shouldn't AI reduce the effort required to build apps? AI companies promise exactly that, yet I hardly see native apps being built for Windows. Same goes for Android, where AI apps launch weeks or months later than on iOS.

@cqkten @FirstSquawk I don't think it matters, even if it sucks, if he gets them to comply

@mweinbach I have this installed on my OnePlus 13R (Snapdragon 8 Gen 3) but no way to actually open the app; the app icon is missing.
VeeVee ^^ retweeted

India enters the open-weights AI race with its largest models pre-trained from scratch: Sarvam 105B and Sarvam 30B
@SarvamAI's Sarvam 105B and Sarvam 30B score 18 and 12 on the Artificial Analysis Intelligence Index respectively. Announced at the India AI Impact Summit 2026 and open-sourced under Apache 2.0, both are Mixture-of-Experts models trained entirely in India using compute provided under the IndiaAI Mission (@OfficialINDIAai). Both support reasoning and non-reasoning modes.
These are an improvement over Sarvam's previous model, Sarvam M (8 on Intelligence Index, 23.6B parameters), which was based on Mistral Small rather than pre-trained from scratch. Sarvam 105B has 106B total parameters with ~10B active per token and a 128K context window. Sarvam 30B has 32B total parameters with ~2.4B active per token and a 65K context window. Alongside the text models, Sarvam also announced Saaras v3 (Speech to Text) and Bulbul v3 (Text to Speech), with a focus on Indic languages.
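As a rough illustration of the total-vs-active parameter split described above, here is a minimal sketch of top-k Mixture-of-Experts routing. All the counts are made up, chosen only to land near a 106B-total / ~10B-active split; they are not Sarvam's published architecture.

```python
# Sketch of top-k Mixture-of-Experts routing: only the chosen experts'
# parameters run for a given token, so "active" params << "total" params.
# All counts below are hypothetical, not Sarvam's actual configuration.

N_EXPERTS = 64                 # experts per MoE layer (hypothetical)
TOP_K = 4                      # experts consulted per token (hypothetical)
PARAMS_PER_EXPERT = 1.6e9      # parameters per expert (hypothetical)
SHARED_PARAMS = 4.0e9          # attention/embeddings, always active

total_params = SHARED_PARAMS + N_EXPERTS * PARAMS_PER_EXPERT
active_params = SHARED_PARAMS + TOP_K * PARAMS_PER_EXPERT

def route(router_scores):
    """Return the indices of the TOP_K highest-scoring experts for one token."""
    ranked = sorted(range(len(router_scores)), key=lambda i: -router_scores[i])
    return ranked[:TOP_K]

print(route([0.1, 0.9, 0.5, 0.8, 0.2, 0.05]))   # -> [1, 3, 2, 4]
print(f"{total_params/1e9:.1f}B total / {active_params/1e9:.1f}B active")
```

With these numbers the model carries 106.4B parameters but only runs 10.4B of them per token, which is why MoE models can match the quality of much larger dense models at a fraction of the inference cost.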
Key takeaways in reasoning mode:
➤ Sarvam 105B scores 18 on the Intelligence Index. Among ~100B-class open-weights reasoning models, it trails GLM-4.5-Air (23), INTELLECT-3 (22), Mistral Small 4 (27), and gpt-oss-120B (High, 33). All four peers also activate more parameters per token.
➤ Sarvam 30B scores 12 on the Intelligence Index. Among ~30B-class open-weights reasoning models, it trails GLM-4.7-Flash (30), Nemotron Cascade 2 30B A3B (28), Qwen3 30B A3B 2507 (22), and Qwen3 32B (17). Sarvam 30B activates fewer parameters than these peers.
➤ Sarvam 105B's relative strength is in select agentic tasks. Its agentic index of 25 places it ahead of INTELLECT-3 (20) and GLM-4.5-Air (21) despite trailing both on overall intelligence. Its GDPval index of 773 also edges ahead of GLM-4.5-Air (665). Both new models are a large step up from Sarvam M (Reasoning), which scored 8 on the Intelligence Index.
➤ Compared to peers, both models score lower on TerminalBench Hard (Agentic Coding & Terminal Use) and AA-Omniscience. Sarvam 105B scored 1.5% and Sarvam 30B scored 2.3% on TerminalBench Hard, compared to GLM-4.5-Air (20.5%) and INTELLECT-3 (9.1%). The AA-Omniscience Index is -60 for Sarvam 105B and -72 for Sarvam 30B. Both models have high hallucination rates relative to their accuracy, and both attempt to answer far more questions rather than abstaining, which drives the negative scores.
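To see how an answer-or-abstain benchmark can produce negative scores, here is a sketch under an assumed scoring rule (+1 per correct answer, -1 per incorrect answer, 0 per abstention, as a share of all questions). The exact AA-Omniscience formula may differ; the point is only the abstain-versus-guess trade-off.

```python
# Sketch: a net knowledge score that rewards abstaining over hallucinating.
# Assumed rule (may differ from AA-Omniscience's exact formula):
# +1 per correct answer, -1 per incorrect (hallucinated) answer,
# 0 per abstention, reported as a percentage of all questions.

def net_knowledge_index(correct, incorrect, abstained):
    total = correct + incorrect + abstained
    return 100 * (correct - incorrect) / total

# Guessing on nearly everything drives the score deeply negative,
# even with some correct answers:
print(net_knowledge_index(correct=15, incorrect=80, abstained=5))   # -65.0

# Abstaining instead of guessing on the same hard questions helps:
print(net_knowledge_index(correct=15, incorrect=30, abstained=55))  # -15.0
```

Under a rule like this, a model that attempts almost every question it cannot answer scores far worse than one with identical knowledge that abstains, which matches the pattern described for the Sarvam models.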
Key model details:
➤ Modality: Text input and output only.
➤ Context window: 128K tokens (Sarvam 105B) and 65K tokens (Sarvam 30B).
➤ Pricing: Currently free on Sarvam's first-party API.
➤ License: Apache 2.0.
➤ Availability: Sarvam's first-party API; weights available on @huggingface and AIKosh.

@tposingluigi @LukasHozda It is still a fork of SDDM, meaning my issue of it sometimes just not showing up may still happen in the future.

@MedeirosPanther For people who don’t understand, it’s because walking promotes farting. On the phone with her, make sure to bring the phone close to your asscheeks when farting so she can hear how dominant your farts are. Attraction 101.

If you're looking to make friends, shared environments like work, dance classes, or other hobby groups are your best bet.
Moss@MossuNoAH
I failed at making friends in college. My dad is worried about it and I genuinely just don't know what to do
