a2go

488 posts


@AnalyticsToGo

Agentic AI for Supply Chain Intelligence. LinkedIn: a2go

Sarasota, FL · Joined March 2018
191 Following · 77 Followers
a2go
a2go@AnalyticsToGo·
Every ERP modernization conversation right now ends in the same question: where do AI agents actually fit? Bolted onto the ERP roadmap? A separate platform? An overlay? And how do you get value in quarters, not years? This webinar cuts through the noise for executives evaluating AI on top of Epicor and other supply chain ERPs. We'll unpack the "decision execution gap" — why even strong ERPs leave hours and days between insight and action — and walk through how agentic AI closes it without rip-and-replace. You'll leave with a clearer framework for what to build, what to buy, and what to expect from your team and your stack over the next 12 months. Reserve Your Spot: elevatiq.com/events-and-web… #Databricks #OTIF #DemandForecasting #InventoryOptimization #CIO #COO #CSCO #CDO #SupplyChainLeaders
Replies 0 · Reposts 0 · Likes 0 · Views 9
a2go
a2go@AnalyticsToGo·
Sometimes webinars disappoint. Been there. But if you are confused, frustrated, and/or looking for information and answers on how to get and use agentic AI in your supply chain operations (without ripping anything out!), register for this one. You'll hear about a real, deployed use case where multiple ERPs stayed where they were but were outfitted with an agentic AI layer that brought them all together. You'll see that someone has finally figured out how to deliver supply chain AI focused on improving financial outcomes, not simply checking a box. elevatiq.com/events-and-web…
SupplyChainBrain@SCBrain

Read Now: ow.ly/kUXF50YVRhO Supply chains are built to respond fast, but what happens when uncertainty slows everything down? Weak links are starting to show, and the consequences are real. Take a closer look at what needs fixing in this article. #supplychainbrain

Replies 0 · Reposts 0 · Likes 0 · Views 9
a2go
a2go@AnalyticsToGo·
A2go.ai's ADIP platform. Agentic AI for supply chain operations. Non-disruptive: keep your existing systems, enter your guardrails and business rules, hit go. Built natively on Databricks. The new supply chain operating system. gemini.google.com/share/985ad809…
Replies 0 · Reposts 0 · Likes 0 · Views 53
a2go retweeted
Jaynit
Jaynit@jaynitx·
In the 1920s, a Stanford psychologist tracked genius children for 50 years. Malcolm Gladwell breaks down what he discovered: Rich families → successful. Poor families → failures. Not average. Failures. Genius-level IQs that produced nothing. He spent 60 minutes at Microsoft explaining why we're wrong about success: The psychologist was named Terman. He gave IQ tests to 250,000 California schoolchildren. He identified the top 0.1%. Kids with IQs of 140 and above. His hypothesis: these children would become the leaders of academia, industry, and politics. He tracked them. And tracked them. For decades. The results split into three groups. The top 15% achieved real prominence. The middle group had average, moderately successful professional lives. And the bottom group? By any measure, failures. The difference wasn't personality. Wasn't habits. Wasn't work ethic. It was simple: the successful geniuses came from wealthy households. The failures came from poor families. Poverty is such a powerful constraint that it can reduce a one-in-a-billion brain to a lifetime of worse than mediocrity. There's a concept called "capitalization rate." It asks a simple question: what percentage of people who are capable of doing something actually end up doing that thing? In inner city Memphis, only 1 in 6 kids with athletic scholarships actually go to college. If our capitalization rate for sports in the inner city is 16%, imagine how low it must be for everything else. Here's something stranger. Gladwell read the birth dates of the 2007 Czech Junior Hockey Team: January 3rd. January 3rd. January 12th. February 8th. February 10th. February 17th. February 20th. February 24th. March 5th. March 10th. March 26th... 11 of the 20 players were born in January, February, or March. This isn't unique to the Czechs. Every elite hockey team in the world shows the same pattern. Every elite soccer team too. Why? The eligibility cutoff for youth leagues is January 1st. 
When you're 10 years old, a kid born in January has 10 months of maturity on a kid born in October. That's 3 or 4 inches of height. The difference between clumsy and coordinated. So we look at a group of 10 year olds, pick the "best" ones, give them special coaching, extra practice, more games. We think we're identifying talent. We're just identifying the oldest. Then we give the oldest more opportunities, and 10 years later they really are the best. Self-fulfilling prophecy. The capitalization rate for hockey talent born in the second half of the year? Close to zero. We're leaving half of all potential hockey players on the table because of an arbitrary date on a calendar. Kids born in the youngest cohort of their school class are 11% less likely to go to college. 11% of human potential squandered because we organize elementary school without reference to biological maturity. Now here's the part about math. Asian kids dramatically outperform Western kids in mathematics. The gap is enormous and consistent across decades of testing. Some people say it's genetic. It's not. It's attitudinal. When Asian kids face a math problem, they believe effort will solve it. When Western kids face a math problem, they believe the answer depends on innate ability they either have or don't. Here's the proof. The international math tests include a 120-question survey. It asks about study habits, parental support, attitudes. It's so long most kids don't finish it. A researcher named Erling Boe decided to rank countries by what percentage of survey questions their kids completed. Then he compared it to the ranking of countries by math performance. The correlation was 0.98. In the history of social science, there has never been a correlation that high. If you want to know how good a country is at math, you don't need to ask any math questions. Just make kids sit down and focus on a task for an extended period of time. If they can do it, they're good at math. 
Why do Asian cultures have this attitude? Gladwell's theory: rice farming. His European ancestors in medieval England worked about 1,000 hours a year. Dawn to noon, five days a week. Winters off. Lots of holidays. A peasant in South China or Japan in the same period worked 3,000 hours a year. Rice farming isn't just harder than wheat farming. It's a completely different relationship with work. There's a Chinese proverb: "A man who works dawn to dusk 360 days a year will not go hungry." His English ancestors would have said: "A man who works 175 days a year, dawn to 11, may or may not be hungry." If your culture does that for a thousand years, it becomes part of your makeup. When your kids sit down to face a calculus problem, that legacy of persistence translates perfectly. Now consider distance running. In Kenya, there are roughly a million schoolboys between 10 and 17 running 10 to 12 miles a day. In the United States, that number is probably 5,000. Our capitalization rate for distance running is less than 1%. Kenya's is probably 95%. The difference isn't genetic. The difference is what the culture values and where it spends its attention. Here's the most fascinating finding. 30% of American entrepreneurs have been diagnosed with a profound learning disability. Richard Branson is dyslexic. Charles Schwab is dyslexic. John Chambers can barely read his own email. This isn't coincidence. Their entrepreneurialism is a direct function of their disability. How do you succeed if you can't read or write from early childhood? You learn to delegate. You become a great oral communicator. You become a problem solver because your entire life is one big problem. You learn to lead. 80% of dyslexic entrepreneurs were captain of a high school sports team. Versus 30% of non-dyslexic entrepreneurs. 
By the time they enter the real world, they've spent their whole life practicing the four skills at the core of entrepreneurial success: delegation, oral communication, problem solving, and leadership. Ask them what role dyslexia played in their success and they don't say it was an obstacle. They say it's the reason they succeeded. A disadvantage that became an advantage. Here's what Gladwell wants you to understand: When we see differences in success, our default explanation is differences in ability. We forget how much poverty, stupidity, and attitude constrain what people can become. We refuse to admit that our own arbitrary rules are leaving talent on the table. We cling to naive beliefs that our meritocracies are fair. The capitalization argument is liberating. It says you don't look at a struggling group and conclude they're incapable. It says problems that look genetic or innate are often just failures of exploitation. It says we can make a profound difference in how well people turn out. If we choose to pay attention. This 60 minute Microsoft talk will teach you more about success than every self-help book you've ever read combined. Bookmark this & give it an hour today, no matter what.
Replies 395 · Reposts 2.1K · Likes 7.8K · Views 1.6M
a2go retweeted
Yasir Ai
Yasir Ai@AiwithYasir·
This paper from Harvard and MIT quietly answers the most important AI question nobody benchmarks properly: Can LLMs actually discover science, or are they just good at talking about it? The paper is called “Evaluating Large Language Models in Scientific Discovery”, and instead of asking models trivia questions, it tests something much harder: Can models form hypotheses, design experiments, interpret results, and update beliefs like real scientists? Here’s what the authors did differently 👇 • They evaluate LLMs across the full discovery loop hypothesis → experiment → observation → revision • Tasks span biology, chemistry, and physics, not toy puzzles • Models must work with incomplete data, noisy results, and false leads • Success is measured by scientific progress, not fluency or confidence What they found is sobering. LLMs are decent at suggesting hypotheses, but brittle at everything that follows. ✓ They overfit to surface patterns ✓ They struggle to abandon bad hypotheses even when evidence contradicts them ✓ They confuse correlation for causation ✓ They hallucinate explanations when experiments fail ✓ They optimize for plausibility, not truth Most striking result: `High benchmark scores do not correlate with scientific discovery ability.` Some top models that dominate standard reasoning tests completely fail when forced to run iterative experiments and update theories. Why this matters: Real science is not one-shot reasoning. It’s feedback, failure, revision, and restraint. LLMs today: • Talk like scientists • Write like scientists • But don’t think like scientists yet The paper’s core takeaway: Scientific intelligence is not language intelligence. It requires memory, hypothesis tracking, causal reasoning, and the ability to say “I was wrong.” Until models can reliably do that, claims about “AI scientists” are mostly premature. This paper doesn’t hype AI. It defines the gap we still need to close. And that’s exactly why it’s important.
Replies 89 · Reposts 218 · Likes 563 · Views 39.5K
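The hypothesis → experiment → observation → revision loop the paper evaluates can be sketched as a toy simulation. Everything here is illustrative (the hypothesis names, the update rule, the success rates are invented, not from the paper); it only shows the iterative structure real science requires and one-shot reasoning lacks:

```python
import random

def discovery_loop(hypotheses, run_experiment, max_rounds=10):
    """Iterate the discovery cycle: pick the most plausible hypothesis,
    test it, and revise belief up on support, down on contradiction."""
    random.seed(0)  # fixed seed so the toy run is reproducible
    beliefs = {h: 0.5 for h in hypotheses}  # uniform prior confidence
    for _ in range(max_rounds):
        h = max(beliefs, key=beliefs.get)   # test the current front-runner
        supported = run_experiment(h)       # noisy observation
        if supported:
            beliefs[h] = min(1.0, beliefs[h] + 0.2)
        else:
            beliefs[h] = max(0.0, beliefs[h] - 0.3)  # abandon bad hypotheses
    return max(beliefs, key=beliefs.get)

# Toy experiment: only the "true" hypothesis succeeds 90% of the time
def noisy_experiment(h):
    return random.random() < (0.9 if h == "H2" else 0.1)

best = discovery_loop(["H1", "H2", "H3"], noisy_experiment)
print(best)  # with this seed, the loop converges on "H2"
```

The point of the sketch is the revision step: a system that cannot lower its own belief when evidence contradicts it, which is what the paper reports for current LLMs, never escapes its first plausible-sounding guess.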
Bojan Radojicic
Bojan Radojicic@BojanRadojici10·
360 years. That is the collective Excel experience of my team of 30 people, in one room. I have personally used Excel for 20 years. Since the very beginning. We've spent decades "crushing it" when it comes to financial modeling. We knew every shortcut. Every nested formula. We thought we had reached the peak of efficiency. (They are better than me, I'll admit.) But I have something to tell you. The game just changed. In my opinion, we are witnessing the biggest innovation since Excel was first released. It's not a new function or a Power BI update. It's Claude. Specifically, Claude's ability to build and manipulate Excel models. For 40 years, the "manual labor" was the tax we paid. Hardcoding formulas. Spending hours formatting cells. Manually linking sheets and building tables from scratch. That era is over. Claude can now handle the heavy lifting of building the structure, the logic, and the formatting in minutes. But here is the part that really surprised me: It actually understands accounting. It understands the relationship between a Balance Sheet and a Cash Flow statement. It understands how operating drivers flow into a P&L. We aren't replacing our expertise. We are finally liberating it. Instead of spending 80% of our time building the model, we spend 100% of our time analyzing the results. If you want this Prompt and Excel model, just drop a comment and I'll send it to you. (Important: follow me so I can DM you!)
Replies 713 · Reposts 236 · Likes 1.3K · Views 122K
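The statement linkage the thread describes (operating drivers → P&L → cash flow) is mechanical enough to sketch in a few lines. The drivers and percentages below are hypothetical, and this is a bare minimum of the linkage, not any real modeling template:

```python
def build_statements(revenue, cogs_pct, opex, depreciation, tax_rate):
    """Minimal linked model: operating drivers feed the P&L,
    and the P&L feeds operating cash flow (indirect method)."""
    cogs = revenue * cogs_pct
    ebitda = revenue - cogs - opex
    ebit = ebitda - depreciation
    tax = max(0.0, ebit * tax_rate)          # no tax benefit modeled on losses
    net_income = ebit - tax
    # Cash flow adds back depreciation because it is a non-cash expense
    operating_cash_flow = net_income + depreciation
    return {"ebitda": ebitda, "net_income": net_income,
            "operating_cash_flow": operating_cash_flow}

stmt = build_statements(revenue=1000.0, cogs_pct=0.6, opex=200.0,
                        depreciation=50.0, tax_rate=0.25)
print(stmt)  # ebitda 200.0, net_income 112.5, operating_cash_flow 162.5
```

In a spreadsheet, each of these lines is a cross-sheet formula; the "understanding accounting" claim amounts to getting these dependencies, especially the non-cash add-back, right.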
NIJ Ruvos
NIJ Ruvos@nahidulislam404·
Everyone is using Claude. Only 1% are actually leveraging it. The difference isn't access. It's the prompt. I tested 1,000+ prompts. Refined them. Broke them. Rebuilt them. That's how I moved into that 1%. Today I generate $2,000–$4,000/month, just by giving Claude better instructions. Most people blame the AI. The real problem is vague prompts. So I'm giving away my Top 21 Claude Mega Prompts — copy-paste version. Universal. Tested. Built for real results. If you know how to instruct AI properly, you gain unfair leverage. How to get it: • Follow (so I can DM you) • Comment "prompt" • Like + Retweet Miss a step = no access
NIJ Ruvos@nahidulislam404

APOB goes beyond simple AI face swaps — it helps turn ideas into shareable, high-impact content. From memes to educational videos, one swap can spark real reach. Start here 👉 mega.apob.ai/NIJ-Ruvos

Replies 92 · Reposts 66 · Likes 164 · Views 12.8K
a2go
a2go@AnalyticsToGo·
Accenture says AI can cut supply chain costs 20%. Most companies won't get there — not because the technology doesn't work, but because of how they deploy it. Three things consistently separate the companies hitting those benchmarks from the ones still running pilots that go nowhere: 1️⃣ They didn't wait to "clean their data first." Governance and AI capability arrived together. 2️⃣ They started specific. One high-impact pain point, proved in weeks, then expanded. Accenture calls it the "self-funding supply chain" — savings fund the next phase. 3️⃣ Their AI acts. Not just reports. ERP AI shortens the time to insight. Agentic AI shortens the time to action. If humans are still making every decision at the same speed, you haven't changed the cost structure — you've made reporting more expensive. ─── A leading industrial manufacturer deployed A2go on top of their existing systems. → 30% inventory cost reduction → 25% cash cycle improvement → Master scheduling: 18 hours → 15 minutes That 30% matches Accenture's top benchmark. Not luck. Design. ─── The research describes the destination. The road is built from unified data, agents that act within guardrails, and governance that keeps humans in the lead. It's available now — not on a roadmap. Contact us at info@a2go.ai Mike Romeri, CEO, lnkd.in/g7SynzrU Cesar Oliveira, COO, lnkd.in/gTuz4HSd Subscribe to A2go Newsletter on LinkedIn lnkd.in/g7e-FxNf Link to Accenture research: lnkd.in/gSwPmNxE #SupplyChain #AI #AgenticAI #Manufacturing #SupplyChainAI
Replies 0 · Reposts 0 · Likes 0 · Views 30
a2go retweeted
Chris Laub
Chris Laub@ChrisLaubAI·
🚨 Anthropic just dropped 12 FREE AI courses that make most “AI degrees” look outdated. They quietly dropped these courses that teach you how to actually build with Claude in 2026: • Make real API calls and ship tool-using agents • Build and deploy full RAG pipelines • Connect models to live tools and data with MCP • Spin up production-grade MCP servers with logs + scaling • Run Claude inside Amazon Bedrock and Google Vertex AI • Automate dev work from the CLI with Claude Code • Integrate GitHub, workflows, prompt scoring, multi-turn agents This is the stack serious builders are learning while everyone else is still arguing about prompts. If you’re not learning agent workflows and Model Context Protocol this year, you’re already behind.
Replies 31 · Reposts 275 · Likes 2K · Views 147.9K
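The "RAG pipelines" bullet in the course list boils down to one core step: retrieve the most relevant context before generation. A toy sketch, using a bag-of-words score as a stand-in for a real embedding model (the documents and scoring here are invented for illustration, not from Anthropic's courses):

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding' standing in for a real model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Core RAG step: rank documents by similarity to the query,
    then hand the top-k to the model as grounding context."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "MCP connects models to live tools and data",
    "RAG pipelines retrieve documents before generation",
    "Claude Code automates dev work from the CLI",
]
print(retrieve("how do RAG pipelines retrieve documents", docs))
```

A production pipeline swaps the toy embedding for a learned one and the list for a vector store, but the retrieve-then-generate shape is the same.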
a2go retweeted
Alex Prompter
Alex Prompter@alex_prompter·
Anthropic's own researchers just proved that using AI to learn new skills makes you 17% worse at them. and the part nobody's reading is more important than the headline. the paper is called "How AI Impacts Skill Formation." randomized experiment. 52 professional developers. real coding tasks with a Python library none of them had used before. half got an AI assistant. half didn't. the AI group scored 17% lower on the skills evaluation. Cohen's d of 0.738, p=0.010. that's a real effect. and here's what makes it sting: the AI group wasn't even faster. no significant speed improvement. they learned less AND didn't save time. but the viral framing of "AI bad for learning" misses what actually matters in this paper. the researchers watched screen recordings of every single participant. they identified 6 distinct patterns of how people use AI when learning something new. 3 of those patterns preserved learning. 3 destroyed it. the gap between them is enormous. participants who only asked AI conceptual questions scored 86% on the evaluation. participants who delegated everything to AI scored 24%. same tool. same task. same time limit. the difference was cognitive engagement. the highest-scoring AI users actually outperformed some of the no-AI group. they asked "why does this work" instead of "write this for me." they generated code then asked follow-up questions to understand it. they used AI as a thinking partner, not a replacement for thinking. the lowest-scoring group did what most people do under deadline pressure: pasted the prompt, copied the output, moved on. they finished fastest. they learned almost nothing. and here's the finding that should concern every engineering manager alive: the biggest score gap was on debugging questions. the skill you need most when supervising AI-generated code is the exact skill that atrophies fastest when you let AI do the work. the control group made more errors during the task. they hit bugs. they struggled with async concepts. 
they got frustrated. and that struggle is precisely what built their understanding. errors aren't obstacles to learning. they ARE learning. removing them with AI removes the mechanism that creates competence. participants in the AI group literally said afterward they wished they'd "paid more attention" and felt "lazy." one wrote "there are still a lot of gaps in my understanding." they could feel the hollowness of having completed something without understanding it. that's not a productivity win. that's debt. this paper isn't an argument against using AI. it's an argument against using AI unconsciously. Anthropic publishing research showing their own product can inhibit skill formation is the kind of intellectual honesty the industry needs more of. the practical takeaway is simple: if you're learning something new, use AI to ask questions, not to skip the work. the struggle is the product.
Replies 174 · Reposts 747 · Likes 3K · Views 195.3K
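The Cohen's d the thread cites (0.738) is the standard pooled-standard-deviation effect size between two groups. A minimal sketch of the formula with made-up scores (not the study's data):

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d with pooled standard deviation: the effect-size
    metric behind the thread's d = 0.738 claim."""
    na, nb = len(group_a), len(group_b)
    # Pooled SD weights each group's variance by its degrees of freedom
    pooled = (((na - 1) * stdev(group_a) ** 2 +
               (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled

# Illustrative evaluation scores only (invented, not from the paper)
control = [80, 85, 78, 90, 88]
ai_group = [70, 72, 68, 75, 74]
print(round(cohens_d(control, ai_group), 2))
```

Rule-of-thumb reading: d around 0.2 is small, 0.5 medium, 0.8 large, so the paper's 0.738 is close to a large effect, which is why the 17% gap is worth taking seriously.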
a2go
a2go@AnalyticsToGo·
Great article! The one thing that often makes me pause is the comment (made by many) that AI can't sit on top of workflows but has to be part of them. It implies that you must re-engineer workflows in order to benefit from AI. For those who do not read further, it can imply that an AI-first strategy, or agentic AI systems, must replace what is already done. It does not mean this, and further explanation is needed. AI can sit atop existing systems, even agentic AI systems. We aren't talking about just LLMs or GenAI; I mean truly agentic systems that automate, monitor, make decisions, and collaborate with humans and systems of agents. It is all about how the AI is connected and integrated. AI Will Not Deliver Enterprise Value Until We Let It Act insideainews.com/2026/01/08/ai-…
Replies 0 · Reposts 0 · Likes 0 · Views 24
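The "AI atop existing workflows" pattern the post argues for reduces to a read → propose → check-guardrails → act-or-escalate cycle sitting over systems that stay untouched. A minimal sketch with hypothetical names throughout (not a2go's actual API):

```python
def agent_step(state, guardrails, act, escalate):
    """One decision cycle of an overlay agent: read state from the
    existing system, propose an action, and only execute it when it
    falls inside the configured guardrails; otherwise hand to a human."""
    proposal = {"item": state["item"],
                "reorder_qty": state["forecast"] - state["on_hand"]}
    if proposal["reorder_qty"] <= 0:
        return "no_action"                  # nothing to do this cycle
    if proposal["reorder_qty"] > guardrails["max_auto_order"]:
        escalate(proposal)                  # human-in-the-loop above threshold
        return "escalated"
    act(proposal)                           # autonomous action inside guardrail
    return "executed"

log = []
result = agent_step(
    state={"item": "SKU-1", "on_hand": 40, "forecast": 100},
    guardrails={"max_auto_order": 500},
    act=log.append,
    escalate=log.append,
)
print(result, log)  # executed [{'item': 'SKU-1', 'reorder_qty': 60}]
```

The point is that `state` comes from the existing ERP via its normal interfaces and `act`/`escalate` write back through them: the workflow is not re-engineered, only wrapped.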
a2go
a2go@AnalyticsToGo·
𝗖-𝗹𝗲𝘃𝗲𝗹 𝗲𝗻𝘁𝗵𝘂𝘀𝗶𝗮𝘀𝗺 𝗴𝗲𝘁𝘀 𝗔𝗜 𝗽𝗿𝗼𝗷𝗲𝗰𝘁𝘀 𝘀𝘁𝗮𝗿𝘁𝗲𝗱. 𝗣𝗲𝗿𝘀𝗶𝘀𝘁𝗲𝗻𝘁 𝗲𝘅𝗲𝗰𝘂𝘁𝗶𝘃𝗲 𝗼𝘄𝗻𝗲𝗿𝘀𝗵𝗶𝗽 𝗶𝘀 𝘄𝗵𝗮𝘁 𝗴𝗲𝘁𝘀 𝘁𝗵𝗲𝗺 𝗳𝗶𝗻𝗶𝘀𝗵𝗲𝗱. The most successful AI and data initiatives don’t just start with the C‑suite—they stay there. Sustained executive sponsorship is one of the strongest predictors of success, with studies showing organizations with active C‑level backing achieve up to 2.5x higher ROI on AI programs compared to peers without it. 𝗔𝘁 𝗔𝟮𝗴𝗼.𝗮𝗶, 𝘄𝗲 𝘀𝗲𝗲 𝗮 𝗰𝗹𝗲𝗮𝗿 𝗽𝗮𝘁𝘁𝗲𝗿𝗻: - C‑level leaders frame the “why” and “where we’re going” for AI and data, turning disconnected projects into a strategic roadmap. - They protect focus across quarters so initiatives don’t stall after the pilot phase. - They create the conditions (governance, funding, talent) for AI agents and data foundations to scale across plants, DCs, and business units. 𝗢𝗻𝗰𝗲 𝘁𝗵𝗮𝘁 𝘀𝘁𝗿𝗮𝘁𝗲𝗴𝗶𝗰 𝗶𝗻𝘁𝗲𝗻𝘁 𝗶𝘀 𝗶𝗻 𝗽𝗹𝗮𝗰𝗲, 𝗸𝗻𝗼𝘄𝗹𝗲𝗱𝗴𝗲 𝘄𝗼𝗿𝗸𝗲𝗿𝘀 𝗯𝗲𝗰𝗼𝗺𝗲 𝘁𝗵𝗲 𝗯𝗲𝘀𝘁 𝘁𝗮𝗰𝘁𝗶𝗰𝗮𝗹 𝘄𝗮𝗿𝗿𝗶𝗼𝗿𝘀 𝗳𝗼𝗿 𝗔𝗜. 𝗧𝗵𝗲𝘆 𝗮𝗿𝗲 𝗰𝗹𝗼𝘀𝗲𝘀𝘁 𝘁𝗼 𝘁𝗵𝗲 𝗿𝗲𝗮𝗹 𝗱𝗲𝗰𝗶𝘀𝗶𝗼𝗻𝘀 𝗮𝗻𝗱 𝗲𝘅𝗰𝗲𝗽𝘁𝗶𝗼𝗻𝘀, 𝗮𝗻𝗱 𝘁𝗵𝗲𝘆 𝗸𝗻𝗼𝘄: - Where they’re drowning in manual work that could be automated safely. - When they need AI agents to bring them the most current, relevant information and recommended actions—ideally with a clear view of financial outcomes like margin, working capital, and service risk. - Why certain tasks are so complex that they’d benefit from a conversational partner to walk through scenarios, constraints, and trade‑offs step by step. 𝗜𝗻 𝘀𝘂𝗽𝗽𝗹𝘆 𝗰𝗵𝗮𝗶𝗻 𝗼𝗽𝗲𝗿𝗮𝘁𝗶𝗼𝗻𝘀, 𝘁𝗵𝗲 𝗹𝗼𝘂𝗱𝗲𝘀𝘁 𝗽𝗮𝗶𝗻 𝗽𝗼𝗶𝗻𝘁 𝘄𝗲 𝗵𝗲𝗮𝗿 𝗶𝘀 𝘀𝘁𝗶𝗹𝗹 𝗱𝗮𝘁𝗮. Teams are told they need a “data lake” or a “single source of truth,” but choosing what will work now and still be viable in three years can feel paralyzing. Third‑party platforms and providers can help, but they only succeed when they involve your planners, schedulers, buyers, and supply chain analysts early and often—alongside sustained C‑level air cover to keep the effort strategic, funded, and moving. 𝗧𝗵𝗮𝘁’𝘀 𝗵𝗼𝘄 𝘆𝗼𝘂 𝘁𝘂𝗿𝗻 “𝘄𝗲 𝗻𝗲𝗲𝗱 𝗔𝗜” 𝗶𝗻𝘁𝗼 𝗮 𝗹𝗶𝘃𝗶𝗻𝗴 𝗿𝗼𝗮𝗱𝗺𝗮𝗽: - C‑suite defines the outcomes and guardrails. 
- Knowledge workers define where, when, and why AI should plug into the flow of work. - AI agents, data foundations, and partners like A2go.ai align around both, so every initiative is both strategically sponsored and operationally real. Contact A2go.ai for strategic and tactical advice for your data needs and supply chain AI. info@a2go.ai
Replies 0 · Reposts 0 · Likes 0 · Views 9