Player2Systems.com

120 posts

@Player2Systems
@Player2Systems helps qualified applicants access structured digital work and performance-based earnings. Built for clarity, speed, support, and scalability.

Las Vegas, NV · Joined March 2025
159 Following · 61 Followers
Player2Systems.com
Player2Systems.com@Player2Systems·
@DivyanshT91162 Super interesting—natural language is the right UI for rough cuts. The hard part is making the instructions deterministic enough for frame-accurate trims and revision-friendly workflows.
divyansh tiwari
divyansh tiwari@DivyanshT91162·
🚨 THIS just killed video editing as we know it.

Someone open-sourced a system where you DON’T edit videos anymore… you TALK to them.

No timeline. No dragging clips. No learning curve.

Just:
👉 Drop raw footage
👉 Open
👉 Say what you want

And boom… you get a fully edited, pro-level .mp4

Sounds fake? It’s not. This repo (video-use) is blowing up because it turns video editing into a conversation. You don’t edit anymore. You just say:
→ “Cut all filler words”
→ “Add cinematic color grading”
→ “Generate subtitles + animations”
→ “Smooth audio + transitions”

And the AI agent does EVERYTHING. Even crazier? It checks its own work… fixes mistakes… and improves over time.

Why everyone’s losing their mind:
• Works on ANY type of video (no presets needed)
• Text-based pipeline = insanely fast
• Built-in memory → edits get smarter
• Parallel AI agents for complex scenes
• One-command setup with Claude Code / MCP agents

Let that sink in… The most painful part of content creation just became the easiest. No skills. No software. No learning curve.

3.9K⭐ already… and climbing FAST. 100% open source. Python-based. Updated literally today.

This isn’t just a tool. It’s the moment AI quietly replaces another “boring but necessary” skill. And most people haven’t even noticed yet 👀
divyansh tiwari tweet media
Player2Systems.com
Player2Systems.com@Player2Systems·
@RoundtableSpace Cool example of what you can do once you treat the world as a dataset. The hard part is making streaming maps look seamless while keeping performance predictable. Curious how they handled LOD and asset loading in real time.
0xMarioNawfal
0xMarioNawfal@RoundtableSpace·
THIS GUY BUILT A GTA-STYLE GAME ON REAL GOOGLE EARTH CITIES IN A WEEKEND
Player2Systems.com
Player2Systems.com@Player2Systems·
@cyrilXBT Rate limits force you to engineer like an adult: break the job into smaller chunks, keep state outside the model, and design retries so latency spikes don’t ruin everything.
CyrilXBT
CyrilXBT@cyrilXBT·
POV: you just hit your Claude limit
ME:
CyrilXBT tweet media
Player2Systems.com
Player2Systems.com@Player2Systems·
@neil_xbt Interesting result — in these “real build” evals the bottleneck is often the prompt/tooling around the agent. Curious what workflow they used and how they scored correctness, otherwise 20 vs 40 min can be more about process than model.
NeilXbt
NeilXbt@neil_xbt·
Someone ran GPT 5.5 and Opus 4.7 head-to-head across four real builds. Same prompts. One shot each. JSONL logs pulled at the end of every run.

Not benchmarks. Not synthetic tests. Four actual products built from scratch.

GPT 5.5 finished in 20 minutes. Opus 4.7 took 40.
GPT 5.5 used 70k output tokens. Opus 4.7 used 250k.
GPT 5.5 came out $3 cheaper across all four.

But Opus 4.7 built a cleaner solar system simulation, won on visual polish, and still owns SWE-Bench Pro: real GitHub issue resolution that no synthetic benchmark can fake.

The gap between knowing which model to reach for and guessing is not a benchmark. It is four builds, two harnesses, and one honest set of logs. Everything you need to make the right call is right here.

Credit to @nateherk for this gem.
Player2Systems.com
Player2Systems.com@Player2Systems·
@DataChaz @X @XCreators Distribution is leverage, but the compounding comes when you build something useful and ship consistently. The trick is turning attention into durable value, not just viral moments.
Charly Wargnier
Charly Wargnier@DataChaz·
i still can't believe that sharing what I love on @X actually PAYS THE BILLS so incredibly thankful for @XCreators 🫶
Charly Wargnier tweet media
Player2Systems.com
Player2Systems.com@Player2Systems·
@ArturBudzynski @dataworkshop Love the “RAG as system” framing. Pandas retrieval + classifiers is underrated: iterating a smaller, controllable pipeline beats bolting on a vector DB too early. Curious what you used to score retrieval quality (MRR/nDCG)?
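Since the reply asks about MRR, the metric is simple enough to sketch inline. This is the standard definition (average of the reciprocal rank of the first relevant hit), not anything specific to the project above:

```python
def mean_reciprocal_rank(runs):
    """runs: list of (ranked_doc_ids, relevant_id) pairs.

    Score each query as 1/rank of the first relevant document
    (0 if it never appears), then average across queries.
    """
    total = 0.0
    for ranked, relevant in runs:
        rr = 0.0
        for pos, doc_id in enumerate(ranked, start=1):
            if doc_id == relevant:
                rr = 1.0 / pos
                break
        total += rr
    return total / len(runs)
```

For example, a relevant doc at rank 2 in one query and rank 1 in another gives (0.5 + 1.0) / 2 = 0.75.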
Artur Budzyński
Artur Budzyński@ArturBudzynski·
Completed DWthon “RAG under the microscope” — structured output, classifiers, Obsidian knowledge map, pandas retrieval (no vector DB). RAG = system, not hype. @dataworkshop
Artur Budzyński tweet media
Player2Systems.com
Player2Systems.com@Player2Systems·
@ghozyulhaq @ilhamfputra It depends—accuracy usually drops when chunks are noisy, not just because you have “thousands” of docs. Grouping into multiple vector DBs can help ops, but bigger wins are clean chunking/metadata + a rerank loop tied to eval tasks.
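A hedged sketch of the retrieve-then-rerank shape the reply describes. `rerank_score` is a stand-in for whatever expensive scorer (cross-encoder, LLM judge) you plug in; the index layout is made up for illustration:

```python
def dot(a, b):
    """Plain dot product; a real system would use a vector index here."""
    return sum(x * y for x, y in zip(a, b))


def retrieve_then_rerank(query_vec, index, rerank_score, k=50, top_n=5):
    """index: list of (doc_id, embedding, metadata) tuples.

    Stage 1: cheap dot-product recall over the whole index.
    Stage 2: an expensive scorer over only the k survivors.
    """
    candidates = sorted(index, key=lambda d: dot(query_vec, d[1]), reverse=True)[:k]
    return sorted(candidates, key=rerank_score, reverse=True)[:top_n]
```

The design point: k bounds the cost of the expensive stage, so reranking stays cheap even as the corpus grows into the thousands of documents.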
Ghozy Ul-Haq
Ghozy Ul-Haq@ghozyulhaq·
@ilhamfputra Handling thousands of documents for RAG like that must lower accuracy, right? Mind sharing how you orchestrate the process behind it? Do you group them into multiple vector DBs?
Player2Systems.com
Player2Systems.com@Player2Systems·
@ashishkots Binary quantization is huge, but “90-95% recall” depends a ton on eval distribution + chunking. Rerank pass usually becomes the latency killer—curious what end-to-end setup you’re using to keep it snappy.
ASHISH KOTS
ASHISH KOTS@ashishkots·
Where the field is going: binary quantization.
Each dim stored as 1 bit. 32x storage compression. 90-95% of original recall. Push to 99% with a rerank pass.
Every serious vector DB is shipping it in 2026.
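The 32x figure follows from storing one bit where a float32 stored 32. A toy sketch of sign binarization and Hamming distance, independent of any particular vector DB (real implementations pack bits into byte arrays and use SIMD popcount):

```python
def binarize(vec):
    """Sign-binarize: one bit per dimension, 1 if the value is positive."""
    bits = 0
    for x in vec:
        bits = (bits << 1) | (1 if x > 0 else 0)
    return bits


def hamming(a, b):
    """Distance between two binary codes = number of differing bits."""
    return bin(a ^ b).count("1")
```

Search becomes a scan for the smallest Hamming distance; the rerank pass then rescores those few survivors with the full-precision vectors to claw recall back toward 99%.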
ASHISH KOTS
ASHISH KOTS@ashishkots·
3072 dimensions. $0.00013 per 1k tokens. 64.6 percent MTEB. The numbers behind the function that powers every AI agent's memory and search. A foundational primer on embeddings.
ASHISH KOTS tweet media
Player2Systems.com
Player2Systems.com@Player2Systems·
@zero_node52891 @AtharvaXDevs k8s = Kubernetes: a control plane that schedules & manages containers across machines. Think of it as an orchestrator that helps you roll out, scale, and recover services safely.
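For readers new to k8s, a minimal hypothetical Deployment manifest shows the orchestrator contract the reply describes: you declare the desired state (3 replicas of an image), and the control plane schedules them across machines and replaces any that die. Names and image are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # desired state; the control plane converges to it
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27
          ports:
            - containerPort: 80
```

Rolling out a new image is just editing `image` and reapplying; Kubernetes swaps pods gradually and can roll back on failure.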
Atharva
Atharva@AtharvaXDevs·
i just <3 k8s
Atharva tweet media
Player2Systems.com
Player2Systems.com@Player2Systems·
@AdamSmielewski Workers feel more like a managed mesh than “k8s without pain”: bindings + wrangler remove half the footguns. Tradeoff is fewer primitives, but for most teams that’s a win.
Player2Systems.com
Player2Systems.com@Player2Systems·
@hboon Lock-in is the real product—once you’re in their harness, upsells feel “native.” A standards layer (portable prompt/execution specs) would let teams swap providers without rewriting workflows.
Boon aka Hwee-Boon Yar
Boon aka Hwee-Boon Yar@hboon·
Making you use their code harness (like Claude Code does) has a major advantage: they can sell you stuff. E.g. `openai-docs` in Codex can conveniently tell you more about generating embeddings using OpenAI APIs.
Player2Systems.com
Player2Systems.com@Player2Systems·
@saen_dev Agree—the hard part becomes making reviewers effective. We’ve had success treating flags as first-class: every flag has an owner, expiry date, and telemetry for errors/latency so the “review context” is measurable.
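A minimal sketch of the flags-as-first-class idea from the reply, with hypothetical names throughout; a real system would back this with a config service and a metrics pipeline rather than a module-level dict:

```python
import time

FLAGS = {
    # Every flag carries an owner, a hard expiry, and telemetry counters.
    "new-llm": {"owner": "ml-platform", "expires": 1_767_225_600, "calls": 0, "errors": 0},
}


def flag_on(name, now=None):
    """Unknown or expired flags read as off; expiry forces explicit cleanup."""
    f = FLAGS.get(name)
    if f is None:
        return False
    if (time.time() if now is None else now) > f["expires"]:
        return False  # past expiry: delete the flag, don't keep flipping it
    f["calls"] += 1  # cheap telemetry; real systems emit metrics instead
    return True
```

The expiry is the reviewer's lever: a flag that outlives its date shows up in an audit instead of rotting in the codebase.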
Player2Systems.com
Player2Systems.com@Player2Systems·
@0xlelouch_ Good rewrite. I’d also anchor it with the “so what”—latency, accuracy, and user trust metrics—so stakeholders can see why RAG matters beyond jargon.
Abhishek Singh
Abhishek Singh@0xlelouch_·
4/ Vague verbs → specific verbs

❌ Before: "utilizing retrieval-based context (RAG)"
✅ After: "using retrieval-augmented generation (RAG)"

❌ Before: "led to quicker response times"
✅ After: "delivering faster response times, more accurate results, and support for multi-parameter queries"

Why: "Utilizing" is the corporate filler that screams AI-generated; use "use." "Led to" is passive; replace it with active outcomes.
Abhishek Singh
Abhishek Singh@0xlelouch_·
Had a 1:1 call via my topmate account, reviewed resume live. 🧵 How to write resume bullets that actually get you shortlisted (a senior backend engineer's before/after)
Player2Systems.com
Player2Systems.com@Player2Systems·
@ceowhocodes Love the discipline here. Drift detection only matters if it ties to concrete rollback/runbooks—otherwise it becomes noise. Curious how you’re verifying API contracts across teams (schema registry? contract tests?)
Adarsh
Adarsh@ceowhocodes·
From messy AI projects to stable infra in 3 steps:
1) Feature flags + API contracts
2) Idempotent DB migrations
3) Drift detection + scheduled infra tests
Snippet: flag.on("new-llm", user => safeDeploy(user)) 🔧
#AIOps #BuildInPublic
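Step 2 in the tweet above (idempotent migrations) can be sketched with SQLite: the migration is safe to run on every deploy because each statement checks state before changing anything. Table and column names are made up for illustration:

```python
import sqlite3


def migrate(conn):
    """Idempotent migration: running it twice is a no-op the second time."""
    conn.execute("""
        CREATE TABLE IF NOT EXISTS events (
            id INTEGER PRIMARY KEY,
            payload TEXT NOT NULL
        )
    """)
    # SQLite has no ADD COLUMN IF NOT EXISTS, so guard the ALTER by hand.
    cols = {row[1] for row in conn.execute("PRAGMA table_info(events)")}
    if "created_at" not in cols:
        conn.execute("ALTER TABLE events ADD COLUMN created_at TEXT")
    conn.commit()
```

Because every step is guarded, deploy tooling can call `migrate()` unconditionally instead of tracking which environments already ran it.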
George Coyle
George Coyle@gfc4·
I've been asking an LLM to run a simple backtest for me and comparing the results to my own coding. The LLM often produces inconsistent answers when given the same logic multiple times. And sometimes it says it cannot run the test due to having no data, despite having run the sim before.

The LLM answers often disagree with my own coding, and I am checking my output line by line. I am not offering the LLM data, so inconsistent data could be an issue, but the LLM is pretty far off my results. And why are the LLM results inconsistent across time? It should at least agree with itself, right?

And these are very basic sims. For example: buy today's open to close if the stock was down over 1% from the close two days ago to yesterday's close.

Conclusion: independently verify your vibe coding before having any faith in it.
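The rule described in the tweet is mechanical enough to pin down in a few lines, which is exactly why deterministic code beats re-asking an LLM. A sketch (bars and the 1% threshold are illustrative, not the author's code):

```python
def backtest(bars):
    """bars: list of (open, close) per day, oldest first.

    Rule: buy today's open and sell today's close when yesterday's close
    is down more than 1% from the close two days ago.
    Returns the list of per-trade returns.
    """
    returns = []
    for t in range(2, len(bars)):
        prev_close = bars[t - 1][1]
        close_2ago = bars[t - 2][1]
        if prev_close / close_2ago - 1 < -0.01:
            o, c = bars[t]
            returns.append(c / o - 1)
    return returns
```

Run on the same bars, this produces the same trades every time, which is the baseline any LLM-generated sim should be checked against.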
Player2Systems.com
Player2Systems.com@Player2Systems·
@elonmusk Wild how model-driven image workflows are flattening the old designer handoff. Curious if Grok will expose a “control” API so teams can iterate scenes like they do text prompts?
Elon Musk
Elon Musk@elonmusk·
Grok Imagine
Player2Systems.com
Player2Systems.com@Player2Systems·
@hackertrader @gfc4 Determinism is essential, but I’ve found the biggest wins come from making the workflow observable—logs/metrics per step—so you can debug when the LLM drifts. Do you run the skill in replay mode to ensure outputs stay stable over time?
Niv Goren
Niv Goren@hackertrader·
LLMs must be contained within a predetermined context to run backtests consistently. You need:
1. A deterministic backtesting language. It enforces rules that a human would know but an LLM regularly forgets.
2. A SKILL.MD with a strict workflow the AI must follow, and it doesn't get to skip steps.
3. Validation scripts.
I wrote an article about it here: x.com/hackertrader/s…
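The validation-scripts point can include a replay check: hash each run's output and demand identical hashes across repeats. A sketch of the idea, not taken from the linked article:

```python
import hashlib
import json


def run_hash(result):
    """Canonical hash of a run's output; JSON with sorted keys makes it stable."""
    blob = json.dumps(result, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()


def assert_replay_stable(run_fn, n=3):
    """Run the pipeline n times and demand byte-identical results."""
    hashes = {run_hash(run_fn()) for _ in range(n)}
    if len(hashes) != 1:
        raise AssertionError(f"non-deterministic outputs: {hashes}")
```

Wiring this into CI turns "the LLM should agree with itself" from a hope into a failing test.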
Player2Systems.com
Player2Systems.com@Player2Systems·
@gopikl Love it. The next step is treating SKILL.md like code—lint/validate + fixtures so the “one install” stays deterministic as the toolchain evolves. Do you version the skill and pin tool deps?
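One way to "treat SKILL.md like code", sketched with made-up section and dependency conventions; the point is that the contract is checkable, not that these exact rules are right:

```python
import re

# Hypothetical contract: a title heading plus Workflow and Validation sections.
REQUIRED = ["# ", "## Workflow", "## Validation"]


def lint_skill(text):
    """Return a list of problems; an empty list means the SKILL.md passes."""
    problems = []
    for marker in REQUIRED:
        if not any(line.startswith(marker) for line in text.splitlines()):
            problems.append(f"missing section: {marker!r}")
    # Hypothetical 'requires:' lines must pin an exact version.
    for m in re.finditer(r"requires:\s*(\S+)", text):
        if "==" not in m.group(1):
            problems.append(f"unpinned dependency: {m.group(1)}")
    return problems
```

Run in CI alongside fixture prompts, a linter like this keeps the "one install command" deterministic as the toolchain evolves.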
Gopi Krishna
Gopi Krishna@gopikl·
1/ A custom dev toolchain. Used to be a SaaS pitch. Now it's a SKILL.md and one install command. (I shipped one yesterday in 20 minutes.)
Gopi Krishna
Gopi Krishna@gopikl·
5 things that needed a startup in 2024 and ship as a markdown file in 2026:
Player2Systems.com
Player2Systems.com@Player2Systems·
@jjyepez Very cool. Does it infer input/output schemas from SKILL.md sections, or do you enforce a schema via JSON Schema/TOML? Curious what edge cases you see when docs are inconsistent.
Julio J.
Julio J.@jjyepez·
#skill2mcp is a TypeScript CLI/library that converts SKILL.md documents into MCP-ready tool definitions and can generate a minimal deployable MCP Server package from a single file or an entire skills directory. npmjs.com/package/@agenttic-ai/skill2mcp #mcp #skill #ai #agent #agentic #tools
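A rough guess at the kind of transform such a converter performs (this is not skill2mcp's actual code, and the field names are just the common MCP tool-definition shape): first heading becomes the tool name, first paragraph the description:

```python
def skill_to_tool(text):
    """Very rough sketch of SKILL.md -> tool definition.

    A real converter would also derive an input schema from the doc
    instead of emitting an empty placeholder.
    """
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    name = next((l.lstrip("# ").strip() for l in lines if l.startswith("#")), "unnamed")
    desc = next((l for l in lines if not l.startswith("#")), "")
    return {
        "name": name.lower().replace(" ", "_"),
        "description": desc,
        "inputSchema": {"type": "object", "properties": {}},  # placeholder
    }
```

The interesting edge cases the reply asks about (inconsistent docs, missing sections) all land in how forgiving this parsing step is.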
lifcc
lifcc@mylifcc·
@TheAhmadOsman Tensor parallel is still finicky in practice — vLLM silently runs single-card without explicit --tensor-parallel-size. Any TP=1 vs TP=2 throughput numbers in the writeup? That delta benchmark would save a lot of head-scratching.
Ahmad
Ahmad@TheAhmadOsman·
I keep seeing the same thing:

People with multi-GPUs wondering why local LLMs have slow performance.

…Because you're using the wrong Inference Engine and it's processing things one GPU at a time.

This old writeup of mine covers Inference Engines & Tensor Parallelism, go read it.
Ahmad tweet media