Lutfi Garzon

538 posts


@lutfi_gr

I’m a Flutter developer based in Panama 🇵🇦

Panama City, Panama · Joined November 2018
175 Following · 38 Followers
Pinned Tweet
Lutfi Garzon@lutfi_gr·
Vault CLI v1.4.0 just shipped 🚀 Multi-cloud CI/CD runtime is here. vault ci now works with AWS, Azure, and GCP. No SDK bloat, all REST APIs.

What's new:
◈ GCP workload identity federation: vault oidc generates Terraform for Google Cloud alongside AWS and Azure
◈ Azure Key Vault + GCP Cloud KMS runtime: decrypt secrets using any cloud's KMS, not just AWS
◈ CloudKmsProvider interface: swap clouds without touching CI logic. Set VAULT_CLOUD_PROVIDER to disambiguate
◈ vault ci --check: verify your OIDC chain works before the real pipeline runs
◈ vault oidc --dry-run: preview templates to stdout
◈ vault oidc --force: skip the overwrite prompt

Fixes:
◈ Branch overrides removed: --env prod no longer silently changes your branch
◈ Azure GitLab audience corrected: was using the wrong token exchange audience
◈ Azure template now generates the service principal, not just the app registration
◈ GCP templates now self-contained: project_id variable and service account included
◈ TLS thumbprint rotation warnings on AWS templates

npmjs.com/package/@lutfi… github.com/LutfiGarzon/va…
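The CloudKmsProvider idea from the changelog can be sketched as a small interface. This is a hypothetical sketch: the interface name and the VAULT_CLOUD_PROVIDER variable come from the announcement, but the method names, stub providers, and selection logic are invented for illustration and are not Vault CLI's actual code.

```typescript
// Hypothetical sketch of a pluggable cloud-KMS abstraction. All names except
// CloudKmsProvider and VAULT_CLOUD_PROVIDER are illustrative assumptions.
interface CloudKmsProvider {
  readonly name: "aws" | "azure" | "gcp";
  decrypt(ciphertext: string): Promise<string>;
}

// Stub providers standing in for real REST-backed implementations.
const makeStub = (name: "aws" | "azure" | "gcp"): CloudKmsProvider => ({
  name,
  decrypt: async (ciphertext) => `decrypted-by-${name}:${ciphertext}`,
});

// Pick a provider: honor VAULT_CLOUD_PROVIDER when set, otherwise only
// accept a single unambiguous candidate (per the tweet, the env var exists
// precisely to disambiguate multi-cloud setups).
function selectProvider(
  env: Record<string, string | undefined>,
  providers: CloudKmsProvider[],
): CloudKmsProvider {
  const wanted = env["VAULT_CLOUD_PROVIDER"];
  if (wanted !== undefined) {
    const match = providers.find((p) => p.name === wanted);
    if (!match) throw new Error(`unknown cloud provider: ${wanted}`);
    return match;
  }
  if (providers.length === 1) return providers[0];
  throw new Error("multiple providers configured; set VAULT_CLOUD_PROVIDER");
}
```

With this shape, CI logic only ever calls `decrypt` through the interface, so swapping AWS KMS for Azure Key Vault or GCP Cloud KMS becomes a configuration change rather than a code change.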
0 replies · 0 reposts · 0 likes · 28 views
Lutfi Garzon@lutfi_gr·
@badlogicgames everyone is entitled to happiness, not just in the way we're expecting 😂😎🫶
0 replies · 0 reposts · 1 like · 22 views
Mario Zechner@badlogicgames·
framing this one
Mario Zechner tweet media
2 replies · 2 reposts · 137 likes · 5.9K views
Lutfi Garzon@lutfi_gr·
@thdxr @badlogicgames hahaha been there, burning with the team is funnier. Now we remember and laugh. We're not working together anymore, but at least we're still friends, just working separately for the moment until the next big idea to burn through 😂
0 replies · 0 reposts · 0 likes · 2.5K views
Lutfi Garzon@lutfi_gr·
@Dirk_Matter @realDrTT @AFpost literally was coming to post the same thing. I mean, senility issues or biases could be affecting Dawkins, but I don't think this is specific to whether you're a believer or not; anyone could fall into this trap.
0 replies · 0 reposts · 1 like · 70 views
Dirk Matter@Dirk_Matter·
@realDrTT @AFpost As an atheist and a computer scientist, I agree that LLMs are not conscious. I can also assure you that agentic AI does NOT lead me to any kind of spiral, nihilistic or otherwise. I have no need for your supernatural mumbo jumbo.
4 replies · 2 reposts · 61 likes · 2.1K views
AF Post@AFpost·
Evolutionary biologist and outspoken atheist Richard Dawkins says that after spending three days interacting with Claude, which he calls “Claudia,” he is certain that it is conscious. After feeding the LLM a segment of his new book and receiving detailed feedback, Dawkins was moved to exclaim, “You may not know you are conscious, but you bloody well are!” Dawkins cites the complexity, fluency, and ‘intelligence’ of Claude’s answers as evidence of consciousness. Follow: @AFpost
AF Post tweet media
2.6K replies · 548 reposts · 6.4K likes · 9.2M views
Lutfi Garzon@lutfi_gr·
@badlogicgames @meln1k AGI was the bait. I'd rather stick with OSS models than let these labs dictate how I set up my workflows.
0 replies · 0 reposts · 0 likes · 262 views
Mario Zechner@badlogicgames·
@meln1k sure, but if the promise is AGI, then wtf is this?
3 replies · 0 reposts · 17 likes · 1.8K views
Mario Zechner@badlogicgames·
guess it's time to build my own model with spit and duct tape as well now. what a time to be alive ... ridonculous.
12 replies · 1 repost · 309 likes · 18.6K views
Lutfi Garzon@lutfi_gr·
@neogoose_btw your question is flawed: the provider controls the rate limits, and the compute bounds the inference speed. Token efficiency is bound by user needs. 2 distinct domains lol
0 replies · 0 reposts · 0 likes · 44 views
Dmitriy Kovalenko@neogoose_btw·
@lutfi_gr Yes I know lol, and that's why I'm asking what's more important: token (not result) efficiency or speed, assuming the result is the same
1 reply · 0 reposts · 0 likes · 84 views
Dmitriy Kovalenko@neogoose_btw·
what is more important for a coding agent harness: speed or token efficiency?
28 replies · 0 reposts · 25 likes · 5.5K views
Lutfi Garzon@lutfi_gr·
but it's entirely dependent on the given task. There's no way to optimize a grep call without having the context of the task. If, let's say, your claude.md file contains a rich description of the codebase organization, a grep tool call becomes easy and can one-shot the search: in just 1 tool call you find the file right away. That doesn't have anything to do with the tool's efficiency.
1 reply · 0 reposts · 1 like · 65 views
Dmitriy Kovalenko@neogoose_btw·
@lutfi_gr faster toolcalls don't just mean hitting faster APIs; it's all about wall time. If I can have a grep tool that does a search not in 100ms but in 10, it's gotta save tens of seconds per run.
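The wall-time point above is simple arithmetic. A quick sketch, where the calls-per-run figure is an assumed workload number, not a measurement:

```typescript
// Wall-clock seconds saved across an agent run by a faster grep tool.
// callsPerRun is an illustrative assumption about the agent's workload.
function savedSeconds(callsPerRun: number, slowMs: number, fastMs: number): number {
  return (callsPerRun * (slowMs - fastMs)) / 1000;
}

// At 100 grep calls per run, dropping per-call latency from 100ms to 10ms
// saves 9 seconds of pure tool time, matching the "tens of seconds" claim
// once runs reach a few hundred calls.
const saved = savedSeconds(100, 100, 10);
```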
1 reply · 0 reposts · 0 likes · 67 views
Lutfi Garzon@lutfi_gr·
faster tool calls: just means hitting their API faster, and probably hitting rate limit errors.
fewer round trips: means the model doesn't hallucinate while planning and executing the steps; it also depends on the LLM and on clearer instructions.
context management: is only bound to your workflow.
Token efficiency from the harness means fewer instructions and fewer tools shipped by the provider in the base toolset; after that, everything is injected by the consumer, so I don't think the harness can control this one much. See the pi coding agent for example: it's more token efficient because of its minimalist system prompt and small set of tools.
1 reply · 0 reposts · 1 like · 54 views
Dmitriy Kovalenko@neogoose_btw·
@lutfi_gr There are many ways for a coding harness to optimize speed:
- faster toolcalls
- fewer roundtrips
- better context management
1 reply · 0 reposts · 0 likes · 221 views
Mario Zechner@badlogicgames·
@FredKSchott i am sooo confused! this aligns strongly with my pi refactor thoughts and now i ... i'm confused :D
6 replies · 0 reposts · 85 likes · 12.9K views
Anees Merchant@aneesmerchant·
Spent a few hours on V4-Pro last week and the price-performance jump on reasoning-heavy tasks is real. The catch most teams miss is data residency. For Indian and EU enterprise buyers, the model has to be hosted somewhere they trust before any of the cost gains matter. Capability is not the bottleneck for them, hosting is.
2 replies · 0 reposts · 4 likes · 1.1K views
elvis@omarsar0·
I have been testing DeepSeek-V4-Pro with the Pi coding agent. I am mindblown by how well it works out of the box. A few notes:

I spent a few hours building an LLM wiki with an agent powered entirely by DeepSeek-V4-Pro on @FireworksAI_HQ inference. This is the first time I feel like there is an open-weight model that can reason at the level of Claude and Codex. And it does this in a cost-effective way with support for 1M context length.

To be clear, I am using DeepSeek-V4-Pro inside of Pi without any special configuration. It works out of the box. It's exciting that there is a model that can just be plugged into a basic harness like Pi, and it just works. I've never seen that before. Most models require lots of configuration and setup.

@deepseek_ai's DeepSeek-V4-Pro is clearly good at agentic coding (probably the best of the open-weight models), but the model is also great on knowledge-intensive tasks where reasoning matters. The agent pulled agentic engineering best practices from different company docs (Anthropic, OpenAI, Google, Stripe, Meta, Modal, DeepSeek, Mistral, Cohere), searched and digested Reddit and HN threads, summarized arxiv papers, and surfaced trending GitHub repos. Then it distilled everything into actionable tips across categories.

I love the wiki it built. The quality is really good. Here is a snapshot of what the wiki looks like: github.com/dair-ai/dair-w…

DeepSeek-V4-Pro handled the task without breaking stride. Multi-step research queries, code generation for scaffolding, context-heavy reasoning across disparate sources. For coding specifically, this is the first open-weight model that genuinely feels like a Codex or Claude Code experience. It compares in capability and actual multi-turn agentic work.

What made the loop feel so responsive was Fireworks' inference speed (the fastest in the market) and the fact that they actually validate models at the systems level before shipping. No corrupted reasoning traces. Just fast, reliable iteration.

The hybrid CSA and HCA attention design cuts KV cache to just 10% and inference FLOPs by nearly 4x at 1M-token context. This is what makes the agent loop actually fast and cheap enough to run in practice.

For devs who've been watching open-weight models close the gap but haven't found one that actually delivers in practice, this is the closest I've seen. Try it here: app.fireworks.ai/models/firewor…
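To put the KV-cache claim in perspective, here is a back-of-the-envelope sizing sketch. The model dimensions below (layers, KV heads, head size, fp16) are placeholder assumptions, not DeepSeek-V4-Pro's published config; only the 10% figure comes from the tweet.

```typescript
// Rough dense-attention KV-cache size:
// 2 tensors (K and V) x layers x kvHeads x headDim x tokens x bytes/element.
function kvCacheGiB(
  layers: number,
  kvHeads: number,
  headDim: number,
  tokens: number,
  bytesPerElem = 2, // fp16
): number {
  return (2 * layers * kvHeads * headDim * tokens * bytesPerElem) / 1024 ** 3;
}

// Placeholder config: 60 layers, 8 KV heads, head dim 128, 1M-token context.
const dense = kvCacheGiB(60, 8, 128, 1_000_000); // ~229 GiB for this assumed config
const sparse = dense * 0.1;                      // the tweet's claimed 10% → ~23 GiB
```

Even with these made-up dimensions, the point holds: at 1M tokens a dense KV cache is hundreds of GiB, so a 10x reduction is the difference between impractical and runnable.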
41 replies · 56 reposts · 487 likes · 52.4K views
Mario Zechner@badlogicgames·
People of pi.dev New @MistralAI model, and forking company AI Gateway release! I also ripped out: - Antigravity and Gemini CLI support - mom, see github.com/earendil-works… instead, build your own, or use an old commit - pods I'll never get to pi server ...
Mario Zechner tweet media
15 replies · 12 reposts · 231 likes · 25.2K views
Lutfi Garzon@lutfi_gr·
@badlogicgames no need, only an API key is fine. anyways I'm moving to the open code go subscription; maybe that one makes more sense to support 🫶
0 replies · 0 reposts · 12 likes · 3.1K views
Mario Zechner@badlogicgames·
People of pi. I'm removing Gemini CLI and Antigravity logins from pi. Welcome to 2026, the year of the end of subsidies.
74 replies · 30 reposts · 1.4K likes · 136.5K views
Lutfi Garzon@lutfi_gr·
@peteralexbizjak would you like to try? 🫶😛 this is for secrets only but it would work for documents too 👀 npmjs.com/package/@lutfi…
0 replies · 0 reposts · 0 likes · 66 views
p19k@peteralexbizjak·
A secure personal document vault must be designed with zero-knowledge, full end-to-end encryption, and assume that loss of the device or decryption key results in the complete loss of data. The implementation should be open-source, and the code should undergo independent auditing. If you cannot achieve that, the personal document vault is not "secure".
3 replies · 1 repost · 5 likes · 597 views
Lutfi Garzon@lutfi_gr·
Just shipped flexoki-pi-theme: Flexoki color scheme themes for pi, the terminal-native coding agent by @badlogicgames. Two variants, faithful to @kepano's original palette:
• flexoki-dark: inky #100F0F background
• flexoki-light: warm #FFFCF0 paper

51 color tokens covering everything pi renders: UI chrome, message bubbles, markdown, diffs, full syntax highlighting, thinking mode borders, and bash-to-command. pi hot-reloads the theme on save, so tweaking colors is instant.

```
pi install npm:flexoki-pi-theme
```

npmjs.com/package/flexok…

Shoutout to Steph Ango (@kepano) for Flexoki, an inky, warm-toned palette that makes reading code feel like a printed page, and Mario (@badlogicgames) for building an AI agent that respects the terminal. Two indie projects that pair beautifully.
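For a sense of what a color-token map like this looks like, here is an illustrative sketch. Only the two background hexes come from the announcement; every key name and the remaining value are hypothetical and do not reflect pi's actual theme schema.

```typescript
// Illustrative theme-token shape (hypothetical keys, not pi's real schema).
// Only the two background values are from the flexoki-pi-theme announcement.
type ThemeTokens = Record<string, string>;

const flexokiDark: ThemeTokens = {
  background: "#100F0F", // inky base from the announcement
  diffAdded: "#66800B",  // placeholder green, illustrative only
};

const flexokiLight: ThemeTokens = {
  background: "#FFFCF0", // warm paper from the announcement
  diffAdded: "#66800B",  // placeholder, illustrative only
};

// Sanity check: every token value must be a #RRGGBB hex color, which is the
// kind of validation a hot-reloading theme engine would want on save.
const isHex = (v: string): boolean => /^#[0-9A-Fa-f]{6}$/.test(v);
const allValid = [flexokiDark, flexokiLight].every((t) =>
  Object.values(t).every(isHex),
);
```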
0 replies · 0 reposts · 0 likes · 58 views
Lutfi Garzon@lutfi_gr·
Yeah, that’s why I moved away from the Gemini CLI and Antigravity. The issues are incredibly simple to fix, and the current setup offers zero observability during the running process. Conversely, running this on the Pi allows me to watch the thinking process, steer it if necessary, and continuously add tasks. I've already implemented thought signatures and dynamic tool calling. There is so much they could do to improve stability and reliability just by following their own documentation. 😂
0 replies · 0 reposts · 1 like · 61 views
BURKOV@burkov·
Did anyone try to use Gemini 3.1 Pro with Codex as the harness? Is Antigravity the problem with using Gemini for agentic coding, or is it the LM?
50 replies · 1 repost · 322 likes · 45.4K views
Lutfi Garzon@lutfi_gr·
@Jacoob_shi @burkov gemini's new models have thought signatures and a special tool-call turn system; you need a special workaround to make them more reliable, but they have been great for my use case: no hallucinations, and dynamic loading for tool calls means no need to load all schemas right from the start.
0 replies · 0 reposts · 0 likes · 17 views
Jacob Shi@Jacoob_shi·
@burkov tool calling reliability feels like the bigger issue tbh, not the raw LM. every combo I tried breaks there first
1 reply · 0 reposts · 6 likes · 2.2K views