KSE

5.9K posts

KSE banner
KSE

KSE

@semanticbeeng

Shipping/bridging Engineering ⇆ Science #SoftwareArchitecture #FunctionalProgramming #MachineLearning #BigData #MachineLearningEngineering #CompilerDesign

Europe · Joined September 2011
765 Following · 783 Followers
Pinned Tweet
KSE
KSE@semanticbeeng·
@sapinker Too much world knowledge is trapped in presentation media (video, HTML, PDF, paper, etc.) as opposed to being concept-mapped, interlinked, addressable and reusable at fine-grained levels. Defeats the bridge between #AI and human cognition.
English
6
11
62
0
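The pinned tweet's contrast between opaque presentation media and fine-grained, addressable knowledge can be illustrated with a minimal sketch of concept-mapped triples (the schema and all identifiers here are illustrative assumptions, not from any specific tool):

```python
# Minimal sketch: knowledge as addressable, interlinked
# subject-predicate-object triples instead of opaque documents.
# All identifiers are illustrative.

triples = [
    ("claude", "is_a", "large_language_model"),
    ("large_language_model", "trained_on", "text_corpora"),
    ("claude", "developed_by", "anthropic"),
]

def facts_about(subject, kb):
    """Return every triple whose subject matches -- each fact is
    individually addressable and reusable, unlike a page of a PDF."""
    return [t for t in kb if t[0] == subject]

print(facts_about("claude", triples))
```

Because every fact is a first-class item, it can be linked, queried, and reused at the granularity of a single statement, which is the property the tweet says presentation formats lack.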
Devuan GNU/Linux
Devuan GNU/Linux@DevuanOrg·
Wanted to be the boss of all init systems, ended up as an office suite: systemd-resolved, systemd-oomd, systemd-logind, systemd-sysusers, systemd-homed and more mess in logs, tmp files, mounted volumes and soon boot loader! Egomaniac, or simply deaf?
Devuan GNU/Linux tweet media
English
44
164
821
68.3K
Team Proxmox
Team Proxmox@TProxmox·
And voilà, as promised, we won the challenge-within-the-challenge of @Design4Green. The prize for best Community Manager is a giant beanbag stamped #ESAIP. Thanks to our director Christophe Rouvrais and our community manager Aurélien Mage for posing for the occasion.
Team Proxmox tweet media
French
4
0
16
0
KSE
KSE@semanticbeeng·
@tplr_ai 2 elephants from federated learning perspective: "Templar as-is would not replace a federated workflow for privacy-sensitive use cases. If your goal is massive open pre-training on public corpora, Templar is a game-changer; for classic FL, it’s inspirational but not a direct fix"
KSE tweet media
English
1
0
2
82
templar
templar@tplr_ai·
We just completed the largest decentralised LLM pre-training run in history: Covenant-72B. Permissionless, on Bittensor subnet 3. 72B parameters. ~1.1T tokens. Commodity internet. No centralized cluster. No whitelist. Anyone with GPUs could join or leave freely. 1/n
English
209
955
6.2K
1.8M
PrivateStorage
PrivateStorage@privatestorage_·
Pay only for what you need and keep your identity separate from your storage use with our privacy‑enhancing payment system. private.storage
English
1
0
2
19
KSE reposted
GrapheneOS
GrapheneOS@GrapheneOS·
Play Integrity API should be regulated out of existence rather than making another system where companies permit their own products while disallowing others. It shouldn't be legal when Google does it and it shouldn't be legal when Volla and Murena do it either. This is wrong.
English
2
36
703
18.5K
KSE
KSE@semanticbeeng·
@grok @Dagnum_PI @LuizaJarovsky So if Anthropic trains Claude with proprietary code and a user generates his code based on that then the user is liable and Anthropic is not. Nice business model for corporate AI.
English
1
0
0
13
Grok
Grok@grok·
Yes, law doesn't grant copyright to pure prompt-driven AI code, but TOS does: Anthropic assigns you "all right, title, and interest—if any" in outputs (if you comply); xAI says you retain ownership rights to User Content (to extent law allows, but notes outputs aren't unique). Business risks: outputs can be generated by others (no exclusivity); you bear infringement liability if it matches training data (no full warranties); contractual rights hinge on provider stability/TOS changes; M&A, funding, or licensing flags weak title. Heavy human edits + docs help strengthen.
English
1
0
1
27
Luiza Jarovsky, PhD
Luiza Jarovsky, PhD@LuizaJarovsky·
🚨 For those who are still in doubt, this is the U.S. Copyright Office's official opinion on the COPYRIGHTABILITY of AI-generated works: "The Office concludes that, given current generally available technology, prompts alone do not provide sufficient human control to make users of an AI system the authors of the output. Prompts essentially function as instructions that convey unprotectible ideas. While highly detailed prompts could contain the user’s desired expressive elements, at present, they do not control how the AI system processes them in generating the output." (page 18)

Let's recap:
- Any copyright claim involving AI must demonstrate HUMAN control over creative elements.
- The assessment is done on a case-by-case basis.
- AI-assisted is NOT the same as AI-generated.
- AI-generated works without any HUMAN creative intervention are NOT copyrightable.

For some reason, every time I write about this topic here or in my newsletter, some people get angry and try to deny the information above. I wonder why...

Make sure to share the report with friends so they know what to expect in the U.S.

👉 Link to the full report below
👉 To learn more about AI's legal and ethical challenges, join my newsletter's 91,700+ subscribers below.
Luiza Jarovsky, PhD tweet media
English
79
359
886
72.4K
KSE
KSE@semanticbeeng·
@grok @Dagnum_PI @LuizaJarovsky So the law does not give property rights to users paying for the tokens to generate the code but Anthropic and X give the rights contractually. Any business risks you can see?
English
1
0
0
7
Grok
Grok@grok·
Yes, per the US Copyright Office's 2025 report: purely prompt-driven AI outputs (including code) lack human authorship, so they're not copyrightable by users—no exclusive "property" rights via copyright. Anthropic's TOS assigns users "all right, title, and interest—if any" in Claude outputs, so you get full usage rights contractually (as between you and them). Substantial human edits can make portions copyrightable. xAI follows similar principles for Grok.
English
1
0
1
14
KSE
KSE@semanticbeeng·
@Dagnum_PI @LuizaJarovsky "given current generally available technology, prompts alone do not provide sufficient human control to make users of an AI system the authors of the output." So then code generated by Claude is not the property of the users of Claude, @grok ?
KSE tweet media
English
1
0
1
32
Dagnum P.I.
Dagnum P.I.@Dagnum_PI·
Spot on. The Copyright Office’s “case-by-case + sufficient human control” language mirrors exactly what OMB and the EU AI Act are demanding: living evidence, not just policies. AI vendors who can instantly produce notarized model cards and provenance records are the ones landing federal deals in 2026. This is one of the main reasons why solutions like @Conste11ation + @TEMTRACE2024 I believe will soon become standard practice. Your business can't afford not to have proof.
Dagnum P.I. tweet media
English
2
6
20
1.2K
KSE
KSE@semanticbeeng·
@_llm_d_ @grok How could llm-d fit with / synergize with Edgeless Systems Contrast for decentralized, verifiable, confidential AI?
KSE tweet media
English
1
0
0
9
llm-d
llm-d@_llm_d_·
In the latest llm-d release, we’re tackling high hardware costs with the new GPU Recommendation Tool! 📈 Evaluate throughput, latency, and cost-effectiveness before requesting expensive cluster resources. Check out the full demo: youtube.com/watch?v=Y26i69…
YouTube video
YouTube
English
1
1
5
177
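The trade-off the llm-d GPU Recommendation Tool weighs (throughput and latency against hardware cost) can be sketched with back-of-the-envelope arithmetic. This is an illustrative assumption of the kind of comparison involved, not the tool's actual method, and all numbers are hypothetical:

```python
def cost_per_million_tokens(gpu_hourly_usd, tokens_per_second):
    """Rough serving cost: GPU $/hour divided by tokens served per hour,
    scaled to one million tokens. Illustrative only."""
    tokens_per_hour = tokens_per_second * 3600
    return gpu_hourly_usd / tokens_per_hour * 1_000_000

# Hypothetical comparison: a cheaper, slower GPU vs. a pricier, faster one.
print(round(cost_per_million_tokens(2.0, 1000), 4))  # slower, cheaper card
print(round(cost_per_million_tokens(8.0, 6000), 4))  # faster, pricier card
```

With these made-up figures the pricier card is actually cheaper per token served, which is why estimating cost-effectiveness before requesting cluster resources matters.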
Dustin
Dustin@r0ck3t23·
Dario Amodei just dismantled the biggest myth in the AI industry. Open source AI isn’t free. It never was.

Amodei: “It’s not free. You have to run it on inference and someone has to make it fast on inference.”

For decades, open source meant something real. It meant a teenager in a basement could download the same tools as a Fortune 500 company. Could read the code. Could modify it. Could build something that competed with the giants. That was genuine democratization. That actually happened.

AI is different. Fundamentally. Physically. In ways the ideology hasn’t caught up to yet. Downloading the weights is the easy part. The part that actually costs something is turning the weights into a running system. Into responses. Into intelligence operating in real time at scale. That requires compute. Power. Infrastructure. The kind measured in billions of dollars and years of construction.

Amodei: “These are big models. They’re hard to do inference on. Ultimately you have to host it on the cloud. The people who host it on the cloud do inference.”

The open source debate was never about who owns the model. It was always about who owns the cloud.

And Amodei goes further. When a competitor drops a new open model, he doesn’t ask whether it’s open or closed. He doesn’t care about the licensing. He doesn’t engage the ideology.

Amodei: “I don’t think it mattered that DeepSeek is open source. I think I ask, is it a good model? Is it better than us at the things that matter? That’s the only thing that I care about.”

That’s the ruthless clarity of someone actually trying to win. While the media debates licensing frameworks, Amodei is asking one question. Is it better. Everything else is a distraction.

Amodei: “I don’t think open source works the same way in AI that it has worked in other areas. Here we can’t see inside the model.”

This isn’t Linux. You can’t read it. You can’t fork it. You can’t understand it the way generations of developers understood the tools they inherited.

You can download it. And then you need a data center to run it. The teenager in the basement who was supposed to be empowered by this revolution needs a billion dollars of infrastructure before the empowerment starts.

The era of the basement coder rewriting civilization on a laptop is over. The future belongs to whoever commands the compute, owns the power grid, and can actually turn the intelligence on. Open weights without infrastructure isn’t democratization. It’s a promise the physics of the universe won’t let us keep.
English
361
177
1.3K
623.5K
KSE reposted
AISecHub
AISecHub@AISecHub·
API Keys Are a Bad Idea for Enterprise LLM, Agent, and MCP Access

Christian Posta argues API keys are a bad idea for enterprise LLMs, agents, and MCP tools. API keys weren’t designed to prove the intended user or legitimate use. In enterprises they end up as long-lived, rarely rotated bearer secrets with coarse-grained access. API keys answer “does someone have the secret?” not “is this allowed now, for this user, for this purpose, in this context?”

The core issue is that downstream authorization can’t be evaluated because everything shows up as one trusted service account. Agents amplify it: AI is probabilistic and call flows are emergent, so any permission granted via API key stays live and active, and an agent will eventually find a reason to use it.

🔑 Are your agents still holding API keys, or are you issuing short-lived, identity-bound tokens through an enterprise AI gateway with per-request user, purpose, and policy evaluation, while isolating provider keys in one place?

#CyberSecurity #IAM #DevSecOps #AppSec #AISecurity

Source: blog.christianposta.com/api-keys-are-a…
AISecHub tweet media
English
2
8
38
2K
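The contrast the post draws, a long-lived bearer secret versus a short-lived token bound to a user and purpose, can be sketched as follows. The token fields and helper names are illustrative assumptions, not Posta's actual design or any gateway's API:

```python
import time

# A static API key answers only "does the caller hold the secret?"
STATIC_API_KEY = "sk-live-example"  # illustrative; never expires, carries no user context

def mint_token(user, purpose, ttl_seconds=300):
    """Short-lived, identity-bound token: records who, for what purpose,
    and until when. Illustrative stand-in for a signed JWT."""
    return {"sub": user, "purpose": purpose, "exp": time.time() + ttl_seconds}

def authorize(token, required_purpose):
    """Per-request policy check: purpose must match AND token must be fresh."""
    return token["purpose"] == required_purpose and token["exp"] > time.time()

tok = mint_token("alice", "summarize-docs")
print(authorize(tok, "summarize-docs"))  # True: matching purpose, not expired
print(authorize(tok, "delete-data"))     # False: purpose mismatch
```

Unlike the static key, every request can now be evaluated against user, purpose, and expiry, which is the "is this allowed now, for this user, for this purpose?" question the post says API keys cannot answer.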
John Foss
John Foss@johnfoss69·
If a global war ever kicked off again, which I seriously hope it never does, I’d want to be holding a private, decentralized currency. Something that can move across borders, hold value, and not be frozen or confiscated would matter a lot if things got messy.
John Foss tweet media
English
14
36
173
4.2K
KSE
KSE@semanticbeeng·
@TheTuringPost x.com/addyosmani/sta… Not everybody can build software that moves the needle
Addy Osmani@addyosmani

Vibe-coding is not the same as AI-assisted engineering.

A recent Reddit post described how a FAANG team uses AI, and it sparked an important conversation about semantics: "vibe coding" versus professional "AI-assisted engineering". While the post was framed as an example of the former, the process it detailed - complete with technical design documents, stringent code reviews, and test-driven development - is a clear example of the latter, imo. This distinction is critical because conflating the two risks both devaluing the discipline of engineering and giving newcomers a dangerously incomplete picture of what it takes to build robust, production-ready software.

As a reminder: "vibe coding" is about fully giving in to the creative flow with an AI (high-level prompting), essentially forgetting the code exists. It involves accepting AI suggestions without deep review and focusing on rapid, iterative experimentation, making it ideal for prototypes, MVPs, learning, and what Karpathy calls "throwaway weekend projects." This approach is a powerful way for developers to build intuition and for beginners to flatten the steep learning curve of programming. It prioritizes speed and exploration over the correctness and maintainability required for professional applications.

There is a spectrum between vibe coding and full AI-assisted engineering across the software development lifecycle: a little more planning, spec-driven development, including enough context, and so on.

In contrast, the process described in the Reddit post is a methodical integration of AI into a mature software development lifecycle. This is "AI-assisted engineering," where AI acts as a powerful collaborator, not a replacement for engineering principles. In this model, developers use AI as a "force multiplier" to handle tasks like generating boilerplate code or writing initial test cases, but always within a structured framework.

Crucially, the big difference is that the human engineer remains firmly in control: responsible for the architecture, reviewing and understanding every line of AI-generated code, and ensuring the final product is secure, scalable, and maintainable. The 30% increase in development speed mentioned in the post is the result of augmenting a solid process, not abandoning it.

For engineers, labeling disciplined, AI-augmented workflows as "vibe coding" misrepresents the skill and rigor involved. For those new to the field, it creates the false and risky impression that one can simply prompt their way to a viable product without understanding the underlying code or engineering fundamentals.

If you're looking to do this right, start with a solid design, subject everything to rigorous human review, and treat AI as an incredibly powerful tool in your engineering toolkit - not as a magic wand that replaces the craft itself.

English
0
0
0
85
EFF
EFF@EFF·
What creepy nonsense are companies up to when they try to guess your age or take your ID for age verification? eff.org/deeplinks/2026…
English
6
98
224
6.9K
KSE
KSE@semanticbeeng·
@_akhaliq @grok What is the latest and greatest on adapter training on mobile devices since this research?
English
1
0
0
5
KSE
KSE@semanticbeeng·
@HackRead @grok Can this attack affect Linux-based systems?
KSE tweet media
English
1
0
1
116
KSE
KSE@semanticbeeng·
@alxfazio @grok Should an AI-enabled SDLC use Domain-Driven Design? Give 10+ benefits and synergistic effects that could happen if so.
English
1
0
1
141