Salesforce

114.7K posts


@salesforce

We're the #1 AI CRM—where humans with agents drive customer success together with AI, data, and Customer 360 apps on one platform. Tweet @AskSalesforce for help

San Francisco, CA · Joined April 2009
1 Following · 580.8K Followers
Salesforce @salesforce ·
10,000 calls per day used to mean hours of manual notes for @PenFed teams. ⚡ Powered by 713K+ LLM calls a month, Agentforce generates summaries automatically, saving 30,000 minutes a day and $3M a year. Watch how: x.com/i/broadcasts/1…
Replies 0 · Reposts 1 · Likes 3 · Views 881
Salesforce @salesforce ·
👎 "per my last email"
👎 meetings about meetings
👎 AI with goldfish memory
👎 PTO → 3-day catch-up spiral
👎 ⌘+F on FAQ pages
👎 47 tabs open
👎 "slow to respond"
👎 AI trained on the internet, not your company
👍 Slackbot
👍👍 Already in @SlackHQ
👍👍👍 Already knows what’s up
Replies 4 · Reposts 12 · Likes 27 · Views 6.3K
Salesforce @salesforce ·
Can we throw some time on your calendar, New York?
Salesforce tweet media
Replies 1 · Reposts 21 · Likes 18 · Views 5K
Salesforce @salesforce ·
🫡 Public service just got agentic. Hear from leaders putting AI to work where it matters most at Agentforce Tour D.C.
• Devin L. Qualls — @USPS
• Lindsey Knight — @BlueStarFamily
• George Jungbluth — @NWS
• Secretary Carlos Del Toro — @USNavy
Join us IRL or online. 👇
Replies 1 · Reposts 7 · Likes 13 · Views 4K
Salesforce @salesforce ·
In a competitive fitness market, @Equinox provides high-end luxury at scale. With Agentforce, their clients get:
⚡️ A 24/7 digital concierge
🗓️ Streamlined class bookings
Replies 2 · Reposts 13 · Likes 20 · Views 3.2K
Salesforce @salesforce ·
70% of a seller’s week = admin. Womp. ☹️ The 🆕 Agentforce Sales gives that time back—automating CRM updates, meeting prep, and follow-ups so reps can spend 100% of their time focused on closing. Watch the full demo to see how deals move faster: bit.ly/47acGNj
Replies 2 · Reposts 3 · Likes 20 · Views 2.9K
Salesforce @salesforce ·
The 🆕 Agentforce Sales hunts, ranks, and enriches leads around the clock—turning prospects into opportunities. No more manual data entry. Just a calendar full of booked meetings. Now that’s pipeline generation at scale.
Replies 4 · Reposts 8 · Likes 21 · Views 3.2K
Salesforce @salesforce ·
🔴 LIVE: See the latest from Agentforce Sales.
• Pipeline Generation
• Deal Acceleration
• Revenue Optimization
Watch how agents work across the entire sales cycle to help you scale revenue. x.com/i/broadcasts/1…
Replies 4 · Reposts 41 · Likes 13 · Views 6.9K
Salesforce @salesforce ·
Real engineering, worth the excitement. Let's just be precise about what it is.

BitNet's ternary quantization is genuinely clever. 1.58-bit weights enabling CPU inference at scale is a meaningful milestone. The energy efficiency gains and on-device potential are real, and MIT-licensed Microsoft Research backing means this will have legs.

That said, "accuracy barely moves" deserves a closer look. Competitive benchmarks at the same parameter count aren't the same as matching dense-precision quality. Quantization trades representational fidelity for efficiency. That's the honest deal. And 5-7 tokens/second works great for personal, interactive use. For agentic or batch workloads, you'll feel the ceiling quickly.

What BitNet genuinely unlocks: on-device AI, offline deployment, edge hardware, emerging markets. That's a big, legitimate story worth telling loudly. What it doesn't do yet: displace cloud inference or replace fine-tuned precision models at scale.

Watch the ecosystem. The benchmark sheet is just the beginning.
Replies 0 · Reposts 0 · Likes 0 · Views 80
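The "you'll feel the ceiling" point is easy to make concrete with back-of-envelope arithmetic. A sketch, using the 5-7 tok/s figure from the thread; the workload sizes (150-token reply, 50 steps of 400 tokens) are made-up illustrative numbers, not measurements:

```python
def generation_time_s(tokens: int, tok_per_s: float) -> float:
    """Seconds to generate `tokens` at a steady decode rate."""
    return tokens / tok_per_s

# Interactive chat: a 150-token reply at 6 tok/s arrives in 25 s,
# roughly the pace a person reads along.
interactive = generation_time_s(150, 6)

# Agentic loop: 50 tool-calling steps of 400 generated tokens each,
# at the same 6 tok/s, is over 3,300 s -- close to an hour end to end.
agentic = generation_time_s(50 * 400, 6)

print(interactive, agentic / 60)
```

Same decode rate, wildly different experience: interactive use hides the latency, while anything that chains many generations serially exposes it.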
Guri Singh @heygurisingh ·
Holy shit... Microsoft open sourced an inference framework that runs a 100B parameter LLM on a single CPU. It's called BitNet. And it does what was supposed to be impossible.

No GPU. No cloud. No $10K hardware setup. Just your laptop running a 100-billion-parameter model at human reading speed.

Here's how it works: Every other LLM stores weights in 32-bit or 16-bit floats. BitNet uses 1.58 bits. Weights are ternary: just -1, 0, or +1. That's it. No floats. No expensive matrix math. Pure integer operations your CPU was already built for.

The result:
- 100B model runs on a single CPU at 5-7 tokens/second
- 2.37x to 6.17x faster than llama.cpp on x86
- 82% lower energy consumption on x86 CPUs
- 1.37x to 5.07x speedup on ARM (your MacBook)
- Memory drops by 16-32x vs full-precision models

The wildest part: Accuracy barely moves. BitNet b1.58 2B4T, their flagship model, was trained on 4 trillion tokens and benchmarks competitively against full-precision models of the same size. The quantization isn't destroying quality. It's just removing the bloat.

What this actually means:
- Run AI completely offline. Your data never leaves your machine
- Deploy LLMs on phones, IoT devices, edge hardware
- No more cloud API bills for inference
- AI in regions with no reliable internet

The model supports ARM and x86. Works on your MacBook, your Linux box, your Windows machine.

27.4K GitHub stars. 2.2K forks. Built by Microsoft Research. 100% Open Source. MIT License.
Replies 881 · Reposts 2.7K · Likes 15.4K · Views 2.2M
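The mechanism the tweet describes — ternary weights plus a single scale, so matmuls reduce to adds and subtracts — can be sketched in a few lines. This is a toy absmean-style quantizer, not Microsoft's actual bitnet.cpp code; the function names are illustrative:

```python
import numpy as np

def absmean_ternary(W: np.ndarray):
    """Quantize a float weight matrix to ternary {-1, 0, +1} values
    plus one per-matrix scale (absmean-style, a simplified sketch)."""
    gamma = float(np.mean(np.abs(W))) + 1e-8       # per-matrix scale
    Wq = np.clip(np.round(W / gamma), -1, 1)       # ternary weights
    return Wq.astype(np.int8), gamma

def ternary_matmul(x: np.ndarray, Wq: np.ndarray, gamma: float):
    """With ternary weights, each output is just sums and differences
    of x's entries (integer-friendly work), then one float rescale."""
    return (x @ Wq.astype(np.float32)) * gamma

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8)).astype(np.float32)
x = rng.normal(size=(1, 8)).astype(np.float32)

Wq, gamma = absmean_ternary(W)
print(np.unique(Wq))        # only values from {-1, 0, 1}
print(ternary_matmul(x, Wq, gamma) - x @ W)  # small, nonzero error
```

The last line is the honest trade-off from the thread in miniature: the ternary product approximates the dense one, it doesn't reproduce it.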
Salesforce @salesforce ·
How much of your sales team’s week is spent on admin vs actual selling?
Replies 1 · Reposts 2 · Likes 2 · Views 2.4K
Salesforce @salesforce ·
See what's new in Agentforce Sales. Streaming live on 𝕏 today at 9 a.m. PT with demos, customers, and hands-on training. x.com/i/broadcasts/1…
Replies 1 · Reposts 1 · Likes 6 · Views 1.5K
Salesforce @salesforce ·
T-shirts on golf courses? Bogey. 🙅 Forcing a playoff and winning on the 18th hole? Birdie. ✅ And so is Agentforce and @livgolf_league transforming broadcasts, operations, and fan experiences with Fan Caddie.
Replies 1 · Reposts 0 · Likes 8 · Views 2.5K