Harrison Painter
51.2K posts

Harrison Painter retweeted

Our security bug bounty program is now public on HackerOne.
We've run the program privately within the security research community, and their findings have strengthened our products. Now anyone can report vulnerabilities and get rewarded.
Read more: hackerone.com/anthropic
Harrison Painter retweeted

Claude Code can now connect to financial datasets in seconds.
Prompt for financial research and company data, with access to 17,000+ stocks and much more.
Here's how to connect in just 60 seconds:
1. Open Claude Code and paste:
"claude mcp add --transport http financial-datasets https:// mcp. financialdatasets. ai/"
2. Authenticate
Type "/mcp" inside Claude Code and complete the OAuth flow in your browser.
You can verify the server is connected at any time:
"claude mcp list"
3. Start prompting
Example prompts:
“What is Apple’s current P/E ratio and market cap?”
“Show me Tesla’s income statement for the last 4 quarters.”
“How has Bitcoin’s price changed over the past year?”
4. Help
If you run into errors, just ask Claude to help you by scanning the official docs here: docs.financialdatasets.ai/mcp-server#cla…
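To recap, the whole setup from a fresh terminal, using the same commands as the steps above:

claude mcp add --transport http financial-datasets https://mcp.financialdatasets.ai/
claude            # open Claude Code, then type /mcp and complete the OAuth flow
claude mcp list   # financial-datasets should show as connected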
Harrison Painter retweeted

Code with Claude is happening now!
▪︎ 9:00AM - Keynote
▪︎ 10:30AM - What's new in Claude Code
▪︎ 11:15AM - Building on Claude at GitHub scale
▪︎ 12:00PM - Get to production faster with Managed Agents
All times PT. twitter.com/i/broadcasts/1…
Harrison Painter retweeted

Introducing SubQ: a major breakthrough in LLM intelligence.
It's the first model built on a fully sub-quadratic sparse-attention architecture (SSA), and the first frontier model with a 12-million-token context window. It is:
- 52x faster than FlashAttention at 1M tokens
- Less than 5% of the cost of Opus
Transformer-based LLMs waste compute by processing every possible relationship between words (standard attention).
Only a small fraction actually matter.
@subquadratic finds and focuses only on the ones that do.
That's nearly 1,000x less compute and a new way for LLMs to scale.
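The general idea is easy to sketch. Below is a minimal toy in Python/NumPy, not SubQ's SSA architecture (which isn't described beyond this thread), just the generic top-k trick of scoring everything and keeping only the strongest key matches per query. A real sub-quadratic kernel would never materialize the full n x n score matrix the way this toy does.

import numpy as np

def dense_attention(Q, K, V):
    # Standard attention: every query is scored against every key,
    # so compute grows as n^2 in sequence length.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def topk_sparse_attention(Q, K, V, k=4):
    # Toy sparse variant: keep only the k highest-scoring keys per
    # query and mask the rest out before the softmax.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    kth = np.partition(scores, -k, axis=-1)[:, -k][:, None]  # k-th largest per row
    masked = np.where(scores >= kth, scores, -np.inf)
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 8, 16))  # 8 tokens, 16-dim heads
print(topk_sparse_attention(Q, K, V).shape)  # (8, 16)

# One way the ~1,000x figure can arise: at n = 1,000,000 tokens, dense
# attention scores n**2 = 1e12 pairs; keeping k = 1,000 keys per query
# costs n*k = 1e9, about 1,000x fewer score computations.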