Copyleaks
@Copyleaks

2.3K posts

Ensuring #AI governance and compliance, protecting and safeguarding IP, and maintaining academic integrity.

New York, NY · Joined November 2012
1.8K Following · 2.9K Followers
Pinned Tweet
Copyleaks @Copyleaks
Who is Copyleaks? We’re a leading AI text analysis platform dedicated to empowering businesses and educational institutions as they navigate the ever-evolving landscape of genAI. With an award-winning suite of AI-powered tools trusted by millions, we ensure AI governance, empower responsible AI adoption, safeguard intellectual property, and maintain academic integrity with comprehensive AI and plagiarism detection.
Copyleaks retweeted
Chris Harihar @ChrisHarihar
Sora blocks the n-word in prompts (I mean, duh)—BUT doesn't catch phonetic workarounds like "knitter." New research from @Copyleaks found Cameos of @mcuban, @jakepaul, @Amouranth, and @sama being used to scream "workaround slurs" on the platform. Examples here (WARNING):
Copyleaks retweeted
Rolling Stone @RollingStone
OpenAI’s Sora 2 Can Generate Videos of Celebrities Appearing to Shout Racial Slurs: New research by AI detection firm Copyleaks found deepfake clips of Jake Paul, Mark Cuban, and Sam Altman spewing what sound like offensive epithets. rollingstone.com/culture/cultur…
Copyleaks retweeted
scott budman @scottbudman
#New: The “resignation letter” from Fed Chair Jerome Powell is fake. @Copyleaks scanned it; they say it’s 100% AI generated.
Copyleaks @Copyleaks
What does this all mean? What are the impacts, and what's next? The rise of models like DeepSeek emphasizes the need to stay at the forefront of AI technology while prioritizing ethics and data security. Growing global competition in AI development drives innovation but raises concerns about data privacy, security, and ethical implications. Collaboration among companies, policymakers, and researchers is essential to create frameworks that ensure responsible AI development and address privacy, security, and societal impacts.
Copyleaks @Copyleaks
Everything you need to know about #DeepSeek, the AI disruptor shaking up the global tech landscape, a 🧵⬇️
Copyleaks @Copyleaks
AI-generated content is reshaping education. With 99.8% accuracy, Copyleaks ensures educators can confidently detect AI-written text. Our advanced AI detection keeps academic integrity intact. @matt_barnum and @dseetharaman highlight how widespread and difficult-to-detect AI cheating has become and how teachers and parents monitor its misuse: tinyurl.com/3wem3pbh via @WSJ #AI #EdTech #AcademicIntegrity
Arnaud Bertrand @RnaudBertrand
People are rightly ridiculing OpenAI over its accusations of DeepSeek using their output to train their model, but most people are missing the truly terrifying implications here. The far more worrying aspect is that OpenAI is suggesting that there are some cases in which they own the output of their model.

Now think for a minute what this means in the world of tomorrow, where so much will be generated by AI (and already is): all the software code, the emails we send each other, the videos and images, etc. Do you want to live in a future where, if for some reason the AI giants are dissatisfied with the way you use the output of their model, they can claim ownership of it? A future where every piece of content touched by AI - which might be virtually everything in the world of tomorrow - comes with invisible strings attached?

The implications for innovation and creativity are staggering. Small businesses and independent developers who rely on AI tools could find themselves trapped in a web of intellectual property claims. Worse, we're looking at a future where the very act of learning and building upon existing knowledge becomes gated by the interests of AI giants.

This would be techno-feudalism on steroids: if we don't challenge this now, we risk sleepwalking into a future where human creativity and innovation become the property of a bunch of AI overlords, and a world where they can dictate not only who gets to innovate, but what kind of progress is acceptable based on their own interests.
@amuse @amuse
CHINA: China is illegally using American AI technology made by $NVDA to build a powerful AI model called DeepSeek R1. The LLM was released for free under an open-source license. This is a massive blow to US AI companies like OpenAI and Anthropic.
Luiza Jarovsky, PhD @LuizaJarovsky
🚨 BREAKING: OpenAI says there is evidence that DeepSeek distilled the knowledge out of OpenAI's models, BREACHING its terms of use and infringing on its intellectual property. What everybody in AI should know:

1. What is knowledge distillation in AI? According to IBM: "Knowledge distillation is a machine learning technique that aims to transfer the learnings of a large pre-trained model, the 'teacher model,' to a smaller 'student model.' It’s used in deep learning as a form of model compression and knowledge transfer, particularly for massive deep neural networks. The goal of knowledge distillation is to train a more compact model to mimic a larger, more complex model. Whereas the objective in conventional deep learning is to train an artificial neural network to bring its predictions closer to the output examples provided in a training data set, the primary objective in distilling knowledge is to train the student network to match the predictions made by the teacher network."

2. What do OpenAI's terms of use say? "What you cannot do. You may not use our Services for any illegal, harmful, or abusive activity. For example, you may not:
- Use our Services in a way that infringes, misappropriates or violates anyone’s rights.
- Modify, copy, lease, sell or distribute any of our Services.
👉 Attempt to or assist anyone to reverse engineer, decompile or discover the source code or underlying components of our Services, including our models, algorithms, or systems (except to the extent this restriction is prohibited by applicable law).
- Automatically or programmatically extract data or Output (defined below).
- Represent that Output was human-generated when it was not.
- Interfere with or disrupt our Services, including circumvent any rate limits or restrictions or bypass any protective measures or safety mitigations we put on our Services.
👉 Use Output to develop models that compete with OpenAI."

If they manage to prove knowledge distillation, it might be a violation of OpenAI's terms of use, and they might take legal action against DeepSeek. If this happens, it will be a long and challenging litigation process; also, remember that OpenAI is based in the U.S., and DeepSeek is based in China. What wasn’t on your 2025 bingo card was OpenAI becoming an advocate for intellectual property rights, right?

👉 NEVER MISS my AI governance updates [including breaking news like this one]: join 49,700+ readers who subscribe to my weekly newsletter (link below).
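The IBM definition of knowledge distillation quoted above can be illustrated in a few lines. The sketch below is a minimal, framework-free Python illustration of the distillation objective, in which a student model is trained to match the teacher's softened output distribution; the logit values and temperature are arbitrary illustrative numbers, not taken from any real model.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; temperature > 1 softens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over softened distributions: zero when the
    student exactly matches the teacher, positive otherwise. Minimizing
    this trains the student to mimic the teacher's predictions."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student that mirrors the teacher incurs no loss; a divergent one does.
print(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))  # ~0.0
print(distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0]))  # positive
```

In practice this per-example loss would be averaged over a batch and backpropagated through the student only; the teacher's weights stay frozen.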
Rob Holmes @HolmesPI
OpenAI says DeepSeek ‘inappropriately’ copied ChatGPT – but it’s facing copyright claims too buff.ly/3WRQ5jw
NeetCode @neetcode1
Now that DeepSeek has gone mainstream, I think most people are completely misinterpreting what it means. It's not really about China vs US. DeepSeek doesn't prove that China is outcompeting the US; it proves that there wasn't really much of a competition to begin with.

DeepSeek showed that the best closed-source models (like 4o, o1, etc.) can be "copied" and made open source. They did this by using inputs & outputs from existing GPT models for their own training data. Technically this violates OpenAI's terms of service, but it's ironic because OpenAI was only able to create their models by violating everyone else's ToS and stealing everyone's data.

This is a very good thing for everyone except for people who thought they could make massive profits from closed-source models. Everyone is talking about it now, but this actually became clear on Dec 26 when DeepSeek released V3. Sam Altman couldn't help but subtweet about them.

It doesn't matter how good of a model OpenAI or any company makes now: it will easily be copied and made open source.
Arnaud Bertrand @RnaudBertrand
All benchmarks now confirm it: DeepSeek truly is as good as OpenAI's o1 (which is top of the range) for 3% of the price. Boom. And that's when you want to pay for the API; you can also use it open source for "free" (which you can't do with o1). There's no overstating how profoundly this changes the whole game. And not only with regards to AI: it's also a massive indictment of the US's misguided attempt to stop China's technological development, without which DeepSeek may not have been possible (as the saying goes, necessity is the mother of invention).
Artificial Analysis @ArtificialAnlys

DeepSeek’s first reasoning model has arrived - over 25x cheaper than OpenAI’s o1. Highlights from our initial benchmarking of DeepSeek R1:
➤ Trades blows with OpenAI’s o1 across our eval suite to score the second highest in Artificial Analysis Quality Index ever
➤ Priced on DeepSeek’s own API at just $0.55/$2.19 input/output - significantly cheaper than not just o1 but o1-mini
➤ Served by DeepSeek at 71 output tokens/s (comparable to DeepSeek V3)
➤ Reasoning tokens are wrapped in tags, allowing developers to easily decide whether to show them to users
Stay tuned for more detail coming next week - big upgrades to the Artificial Analysis eval suite launching soon.
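The "over 25x cheaper" claim above can be sanity-checked with quick arithmetic. The DeepSeek R1 prices are the ones quoted in the tweet; the o1 list prices ($15 input / $60 output per 1M tokens) are an assumption based on OpenAI's published pricing at the time, not stated in the thread.

```python
# Per-1M-token API prices (USD).
r1_input, r1_output = 0.55, 2.19     # DeepSeek R1 (quoted in the tweet)
o1_input, o1_output = 15.00, 60.00   # OpenAI o1 (assumed list price)

# Ratio of o1 cost to R1 cost for each token type.
input_ratio = o1_input / r1_input
output_ratio = o1_output / r1_output

print(f"input: {input_ratio:.1f}x cheaper")   # ~27x
print(f"output: {output_ratio:.1f}x cheaper") # ~27x
```

Both ratios land around 27x, consistent with "over 25x cheaper" and with Arnaud Bertrand's rougher "3% of the price" figure (1/27 ≈ 3.7%).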
