Marek Varga

2.5K posts

@aemarcuss

i create web applications. i tweet about technology, politics, business and pics of cats :)

Slovak Republic · Joined September 2011
332 Following · 152 Followers
Marek Varga @aemarcuss
@zdenekkubik72 @VojtaMT since they divorced in 1979, it's unlikely that they live together or that one of them has an iPad the other one was logged into
1 · 0 · 0 · 9
Czech Made Man @zdenekkubik72
@VojtaMT What if Elon's father wrote it, his mother was logged in, and he didn't notice? That strikes me as the Occam's razor simpler explanation
2 · 0 · 1 · 201
Marek Varga @aemarcuss
@levelsio beauty of having money is that you can make things happen. if you want air-recuperation ventilation, even though it's not standard, you can have it installed.
0 · 0 · 0 · 15
@levelsio
So a few things: until recently, European homes didn't have AC installed. Not just no HVAC, barely any AC at all. Even now it's very low; about 20% of European homes have AC. Our house has AC in every room, but it's just for cooling. It does nothing for air treatment like removing CO2, setting humidity right, getting in fresh air, etc. But no HVAC, no air tubes, no central air. I haven't seen any European house that has that; the best we have is an air vent in bathrooms that goes to the roof to get humidity out a bit. Of course Europeans just open the window, which is great. I live near the ocean, but at night there's noise sadly: barking dogs, garbage trucks at 6am. It goes through ear plugs. I'm not complaining, but an HVAC system would fix the CO2 and get fresh air in without the noise. And sure, office buildings in Europe have it, but in homes we don't have air treatment like HVAC, no ducts, nothing!
Tom Schmidt >|< @tomhschmidt

The European reinvents HVAC from first principles

117 · 3 · 343 · 110.8K
@levelsio
I still haven't solved the CO2 bedroom challenge. You open the window and you wake up from a 6am garbage truck or barking dogs and sunlight. You close it, you suffocate in 1200 ppm at 5am. I guess you really need some mini tube in your wall with a vent that opens and closes based on internal CO2, but how do I build that?
2.4K · 81 · 4.4K · 2.2M
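The CO2-driven vent in the tweet above is essentially a thermostat for air. A minimal sketch of just the control logic, with hypothetical thresholds and no real sensor or actuator wiring; a real build would pair something like this with a CO2 sensor and a motorized damper:

```python
# Hysteresis controller for a CO2-driven fresh-air vent.
# Two thresholds (open high, close low) stop the vent from
# chattering around a single set point.
OPEN_PPM = 1000   # open the vent once CO2 rises past this
CLOSE_PPM = 700   # close it again once the air is fresh

def vent_state(co2_ppm: int, currently_open: bool) -> bool:
    """Return True if the vent should be open for this reading."""
    if co2_ppm >= OPEN_PPM:
        return True
    if co2_ppm <= CLOSE_PPM:
        return False
    return currently_open  # inside the dead band: keep current state

# Example: a night's worth of readings driving the vent
readings = [600, 900, 1100, 1250, 900, 800, 650]
state = False
log = []
for ppm in readings:
    state = vent_state(ppm, state)
    log.append(state)
# log == [False, False, True, True, True, True, False]
```

Note the 900 ppm readings behave differently depending on history: the vent stays closed on the way up and stays open on the way down, which is exactly what the dead band buys you.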
Marek Varga @aemarcuss
@synopsi @levelsio but it does not exchange air, does it? i installed a full-house air-recuperation system which runs 24/7 - always good co2, and thanks to the heat-exchange system i almost don't have to use ac.
1 · 0 · 0 · 25
Rasty Turek @synopsi
@levelsio I run ventilator on my AC (so not the AC itself) 24/7 for this exact reason.
2 · 0 · 5 · 1.1K
Marek Varga @aemarcuss
@michalillich for us the numbers aren't that high in absolute terms, but we comfortably save 80% of costs. in the age of nonstop bots etc., scaling web servers is completely unrealistic, so you always need the performance
0 · 0 · 1 · 287
Rasty Turek @synopsi
Let's see how many people lived through this era. Finish this without searching for it: FCKGW
3 · 0 · 6 · 1.6K
Marek Varga @aemarcuss
@synopsi i do not mean body weight. either a heavy vest or a belt with a plate carrier for training, then records without them
0 · 0 · 0 · 10
Rasty Turek @synopsi
@aemarcuss But in all seriousness, I do many other exercises. This is just the punchline at the end. Adding weight is something I have considered. Just not looking forward to it.
1 · 0 · 0 · 30
Marek Varga @aemarcuss
@synopsi @thisistamhn i would suggest adding more weight. it will help muscle growth, which will help with endurance. or different exercises which will help “secondary” muscles grow (ones which you are using “just a bit” on pull-ups)
1 · 0 · 0 · 43
Rasty Turek @synopsi
@thisistamhn Doing my best. But can’t push past it. Always break at exhaustion and can’t figure out how to add to it
2 · 0 · 0 · 327
Marek Varga reposted
Andrej Karpathy @karpathy
New art project. Train and inference GPT in 243 lines of pure, dependency-free Python. This is the *full* algorithmic content of what is needed. Everything else is just for efficiency. I cannot simplify this any further. gist.github.com/karpathy/8627f…
651 · 3.1K · 25.1K · 5.2M
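The gist itself is linked above. As a flavor of what "pure, dependency-free Python" transformer math looks like, here is an unrelated minimal sketch (not from the gist) of single-query scaled dot-product attention, the core operation inside GPT, using only the standard library:

```python
import math

def softmax(xs):
    # numerically stable softmax, no numpy needed
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(q, ks, vs):
    """One query vector attends over lists of key/value vectors."""
    d = len(q)
    # similarity of the query to each key, scaled by sqrt(d)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in ks]
    w = softmax(scores)
    # weighted mix of the value vectors
    return [sum(wi * v[j] for wi, v in zip(w, vs)) for j in range(len(vs[0]))]

# with identical keys the weights are equal, so attention
# just averages the values: ctx == [3.0, 0.0]
ctx = attention([1.0, 0.0], [[1.0, 0.0], [1.0, 0.0]], [[2.0, 0.0], [4.0, 0.0]])
```

Everything else in a full GPT (multiple heads, layers, training loop) is this plus bookkeeping, which is the point Karpathy's tweet is making.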
Marek Varga @aemarcuss
@Oblivious9021 block india and make sure the blocking page is on a cdn. they get a 20ms response and i get fewer hacking attempts. win win
0 · 0 · 0 · 37
Shreya @Oblivious9021
Interviewer: Your page loads in 80 ms in Australia but 600 ms in India. Same backend. Same code. What would you use to fix this?
582 · 171 · 7.6K · 1.9M
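A common answer to the interview question above is that the 520 ms gap is network distance, not backend speed, so you put a CDN edge near the user and let it cache the response. A minimal sketch of the cache headers that make a response edge-cacheable; the helper and values are illustrative, not tied to any particular CDN or framework:

```python
# Make a response CDN-cacheable so edge PoPs near the user serve it
# instead of the distant origin. Header values are typical defaults,
# not a recommendation for any specific CDN.
def cacheable_headers(max_age=60, edge_age=3600):
    return {
        # browsers may reuse the response for max_age seconds;
        # shared caches (the CDN) may keep it for edge_age seconds
        "Cache-Control": f"public, max-age={max_age}, s-maxage={edge_age}",
        # cache compressed and uncompressed variants separately
        "Vary": "Accept-Encoding",
    }

headers = cacheable_headers()
```

With headers like these, the first request from India still pays the origin round trip, but subsequent requests are served from a nearby edge at CDN latency; truly dynamic responses would need edge compute or a regional replica instead.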
Marek Varga reposted
sisyphus bar and grill @itunpredictable
@cedar_db is incredibly cool and more people should know about it. They're a team of PhDs in Munich building a new relational database, on top of almost 10 years of academic research, that crushes existing benchmarks and maybe (finally?) gets us to the HTAP grail.

The core idea is that existing RDBMSes like MySQL and Postgres were built more than 30 years ago, on assumptions about hardware constraints that are just not true anymore. These ecosystems have evolved admirably but ultimately…it's a database. It's built not to change very much. Here are a few of the ways that CedarDB is rethinking every element of the database:

1) A better query optimizer
In the last 30 years we've made a lot of progress on how to optimize SQL queries, to the point where an optimized query can easily outperform a not-so-optimized query by a ton. But not many query optimization improvements have made the leap from research into databases today. CedarDB did a few things on this front:
- Implemented the unnesting algorithm developed by Thomas Neumann (one of the leaders of the Umbra research project CedarDB came from), an improvement of more than 1000x
- Developed a novel approach to join ordering using adaptive optimization that can handle 5K+ relations
- Created a statistics subsystem that tells the optimizer things that traditional databases can't

2) What if your database was actually a compiler?
CedarDB doesn't interpret queries; it generates code instead. For every SQL query that a user writes, CedarDB processes it, optimizes it, and generates machine code that the CPU can directly execute. This has been a holy grail for a while, and they implemented it via a custom low-level language that is cheap to convert into machine code via a custom assembler. Another way that CedarDB improves performance is through Adaptive Query Execution: essentially, they start executing each query immediately with a "quick and dirty" version, while working on better versions in the background.

3) Taking advantage of all cores / Amdahl's law
Distributing work fairly between all available cores is notoriously difficult, and the CedarDB team would argue that most databases underutilize their hardware. Their clever approach to this problem is called morsel-driven parallelism. CedarDB breaks down queries into segments: pipelines of self-contained operations. Then, data is divided into "morsels" per segment: small input data chunks containing roughly ~100K tuples each. You can read more in the original paper here: db.in.tum.de/~leis/papers/m…

4) Rethinking the buffer manager
Modern systems come equipped with massive amounts of RAM; there's actually much more "room at the club" than database developers initially assumed. So the idea of the revamped buffer manager in CedarDB is that you can (and should) expect variance not just in data access patterns, but in storage speed and location, page sizes and data organization, and memory hierarchy. CedarDB's buffer manager is designed from the ground up to work in a heavily multi-threaded environment. It decentralizes buffer management with pointer swizzling: each pointer (memory address) knows whether its data is in memory or on disk, eliminating the global lock that throttles traditional buffer managers.

5) Building a database for change
Databases are built to not change. It's exactly this stability that gives each generation the confidence to build their apps (no matter how different they are) on systems like Postgres. You know what you're getting. But there's also a clear downside to this rigidity. CedarDB's storage class system employs pluggable interfaces where adding new storage types doesn't require rewriting other components. E.g. if CXL becomes the go-to storage interface at some point in the future, you don't need to write another whole component, you just need another endpoint for the buffer manager.

Anyway these are just a few of the ideas they're bringing to the table. Maybe it's because they're in Germany, maybe it's because they're just really humble, but more people should know about this team!! Check out the full post here: amplifypartners.com/blog-posts/the…
28 · 71 · 681 · 287.3K
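The morsel-driven parallelism described in point 3 can be sketched in a few lines. This is not CedarDB's implementation; it is just the core idea, with the morsel size shrunk from ~100K tuples to 4 for illustration: carve the input into small, independently schedulable chunks, and let a worker pool pull them so cores stay busy regardless of data skew.

```python
from concurrent.futures import ThreadPoolExecutor

MORSEL_SIZE = 4  # CedarDB uses roughly ~100K tuples; tiny here for illustration

def morsels(rows, size=MORSEL_SIZE):
    # carve the input into small, independently schedulable chunks
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

def run_pipeline(rows, op, workers=4):
    # each worker grabs a morsel and applies the pipeline segment to it;
    # partial results are concatenated at the end (map preserves order)
    with ThreadPoolExecutor(max_workers=workers) as ex:
        partials = ex.map(lambda m: [op(r) for r in m], morsels(rows))
        return [r for part in partials for r in part]

result = run_pipeline(list(range(10)), lambda x: x * 2)
# result == [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

The scheduling benefit is that no core is assigned a fixed half of the data up front: a slow morsel delays only itself, and idle workers keep pulling the remaining morsels.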
Marek Varga reposted
Wildminder @wildmindai
Not a Qwen-TTS, but the small and fast LuxTTS delivers SOTA voice cloning from just 1GB VRAM:
- >150x realtime speed
- outputs crisp 48kHz audio
- matches models 10x its size
github.com/ysharma3501/Lu…
11 · 64 · 507 · 25.4K
Marek Varga reposted
Hugging Models @HuggingModels
NVIDIA just dropped PersonaPlex-7B 🤯 A full-duplex voice model that listens and talks at the same time. No pauses. No turn-taking. Real conversation. 100% open source. Free. Voice AI just leveled up. huggingface.co/nvidia/persona…
151 · 1.1K · 9.6K · 2.6M
Marek Varga reposted
Lior Alexander @LiorOnAI
You can now run 70B LLMs on a 4GB GPU. AirLLM just made massive models usable on low-memory hardware.

𝗪𝗵𝗮𝘁 𝗷𝘂𝘀𝘁 𝗵𝗮𝗽𝗽𝗲𝗻𝗲𝗱
AirLLM released memory-optimized inference for large language models. It runs 70B models on 4GB VRAM. It can even run 405B Llama 3.1 on 8GB VRAM.

𝗛𝗼𝘄 𝗶𝘁 𝘄𝗼𝗿𝗸𝘀
AirLLM loads models one layer at a time. Instead of loading everything:
→ Load a layer
→ Run computation
→ Free memory
→ Load the next layer
This keeps GPU memory usage extremely low.

𝗞𝗲𝘆 𝗱𝗲𝘁𝗮𝗶𝗹𝘀
• No quantization required by default
• Optional 4-bit or 8-bit weight compression
• Same API as Hugging Face Transformers
• Supports CPU and GPU inference
• Works on Linux and macOS Apple Silicon

𝗪𝗵𝗮𝘁 𝘆𝗼𝘂 𝗰𝗮𝗻 𝗱𝗼
• Run Llama, Qwen, Mistral, Mixtral locally
• Test large models without cloud GPUs
• Prototype agents on cheap hardware
365 · 1.2K · 11.2K · 636.4K
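The load/compute/free loop described above can be sketched abstractly. Note that `load_layer` and `apply_layer` here are hypothetical stand-ins (a toy "model" where each layer scales its input), not AirLLM's actual API; the point is only the memory shape: one layer resident at a time, so peak memory is one layer rather than the whole model.

```python
# Sketch of layer-streaming inference: weights for only one layer
# are resident at any moment.
def load_layer(i):
    # stand-in for reading one transformer layer's weights from disk
    return {"scale": i + 1}

def apply_layer(weights, x):
    # stand-in for the layer's forward computation
    return [v * weights["scale"] for v in x]

def streamed_forward(n_layers, x):
    for i in range(n_layers):
        w = load_layer(i)      # load a layer
        x = apply_layer(w, x)  # run computation
        del w                  # free memory before loading the next layer
    return x

out = streamed_forward(3, [1.0])
# out == [6.0]  (scaled by 1, then 2, then 3)
```

The cost of this scheme is latency, not capacity: every token pays the disk-to-GPU transfer for each layer, which is why layer streaming suits prototyping more than high-throughput serving.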
Marek Varga reposted
Akshay 🚀 @akshay_pachaar
clone any voice with a 5-second audio clip. VoxCPM is an open-source project that takes a fundamentally different approach to text-to-speech.

most TTS systems convert speech into discrete tokens. this creates a bottleneck that limits how natural the output can sound. VoxCPM skips tokenization entirely. it models audio in continuous space using an end-to-end diffusion autoregressive architecture. the result is speech that actually sounds human.

here's what makes it special:
> context-aware generation: it reads your text and infers the right prosody, emotion, and pacing automatically. no manual tuning required.
> zero-shot voice cloning: give it a short audio clip, and it captures not just the voice, but accent, rhythm, and emotional tone. the model was trained on 1.8 million hours of bilingual data (English and Chinese)

- supports streaming synthesis
- works with both full fine-tuning and LoRA
- simple Python API: `pip install voxcpm`

VoxCPM1.5 runs at 44.1kHz sampling rate with 800M parameters, so this is noticeably crisper and more natural. it's Apache-2.0 licensed, so you can actually use it in production. link to the GitHub repo in the next tweet.
37 · 223 · 1.4K · 80.5K
Marek Varga reposted
Michal Bláha @michalblaha
You can be the best in the world, push the whole vertical much further ahead, but then Google comes along and releases one smaller model. RIP DeepL ollama.com/library/transl…
15 · 8 · 178 · 27.9K
Marek Varga reposted
Lior Alexander @LiorOnAI
You can now clone a human voice in real time without tokenization. OpenBMB just open sourced VoxCPM weights with real time streaming and LoRA fine tuning. It runs at ~0.15 real time factor on a single RTX 4090.

𝗧𝗵𝗶𝘀 𝗿𝗲𝗺𝗼𝘃𝗲𝘀 𝘁𝗼𝗸𝗲𝗻𝗶𝘇𝗮𝘁𝗶𝗼𝗻 𝗳𝗿𝗼𝗺 𝗧𝗧𝗦
Instead of mapping audio to discrete tokens, it generates continuous speech directly. That avoids token artifacts and preserves prosody, pacing, and emotion.
• End to end diffusion autoregressive generation
• Continuous acoustic representations
• No phoneme or codec token bottlenecks

𝗜𝘁 𝗰𝗹𝗼𝗻𝗲𝘀 𝘃𝗼𝗶𝗰𝗲𝘀 𝗳𝗿𝗼𝗺 𝘀𝗲𝗰𝗼𝗻𝗱𝘀 𝗼𝗳 𝗮𝘂𝗱𝗶𝗼
A short reference clip is enough. Accent, rhythm, tone, and timing carry over.
• Zero shot voice cloning
• No speaker specific training
• Works in streaming mode

𝗜𝘁 𝗿𝘂𝗻𝘀 𝗳𝗮𝘀𝘁 𝗮𝗻𝗱 𝗶𝘀 𝘁𝘂𝗻𝗮𝗯𝗹𝗲
Streaming works chunk by chunk with sub second latency. LoRA fine tuning lets you adapt voices without full retraining.
45 · 390 · 3K · 144.9K
Marek Varga reposted
Noah Frydberg | Tiktok Shop For Brands
Nano Banana + Fastmoss + Manus + Veo3 = AI Content Factory

We built a fully automated system that repurposes, localizes, and launches winning TikTok Shop content across hundreds of creator-style accounts. It's so effective it feels like running Facebook ads in 2008.
- CPMs as low as $0.10
- no reliance on paid ads
- no ghost creators
- no wasted samples
- no lost time

My $300/monthly tech stack which replaced $50k+ budget:
- manus for product research and viral script ideas
- cruva / Fastmoss for recently viral content ideas from competitors
- nano banana pro for images
- kling 2.6 for video
- now using my own phone posting network for automated posting

Here's how it works:
• Each AI Agent spins up a TikTok Shop–ready profile, built to sell my products through shoppable videos.
• Agents are prompted to research the niche, scrape winning TikTok Shop videos, and rebuild them with new hooks, angles, and UGC-style visuals tailored to your brand.
• They create and post daily using my tech stack onto affiliate accounts

No touchpoints. No delays. Just shoppable videos going live and GMV compounding every week. Then we use an MPS (Multi-Platform Swarm) approach: once the concept works on TikTok Shop, we deploy hundreds of AI Agents to flood the niche with variations that all drive back to your Shop and Amazon listing.

I'm giving you access to the full stack: the ai workflow, ready to plug into your TikTok Shop today. Comment "Workflow" and I'll send you everything. (must be connected)

PS – Repost for early access to the full TikTok Shop content factory system.
1.8K · 573 · 5.2K · 1.2M