Kaylani Donovan

102 posts

@DonovanKay29015

From Elizabethborough, chasing dreams in the field of poor.

Joined May 2025
33 Following · 3 Followers
Kaylani Donovan@DonovanKay29015·
@jasleen2020 The long-term impact of cryptocurrency economics will depend on governance, regulation, and responsible innovation over the next decade.
Jas@jasleen2020·
OK, one of the next big ideas IMO: detecting agent intention. We know how to figure out human intention through telemetry signals, but we need to collect an obscene amount of data to build signals on what an agent is trying to do and why. Merchants will unblock bots to welcome agents to their sites, but how do you figure out what might be fraudulent activity by an autonomous agent? There are layers of detection required: good human, bad agent (hallucinating/going rogue); bad human, bad agent (acting as a good agent). Stripe is well positioned to collect the data and enhance the Radar product, but this is also an open, green field for someone else to identify valid signals and build signal detection.
John Collison@collision

Increasingly, the best part of using Stripe is the millions of other companies using Stripe. How we use the network to improve the product has been a big focus this year. Here are some of the networked ships from Stripe Sessions this week:

Fraud. Radar is trained on signals from across Stripe, which now sees most internet users and most payments. If a bad actor signs up for your product, we've generally already seen their device fingerprint, their email, or their card behavior on someone else's business. For one AI company, 80% of the bad actors Radar caught had sailed right through their prior anti-fraud provider. The more Stripe grows, the better every business on Stripe is protected.

Link started as a way to save your payment details and has grown into a network of more than 250 million consumers. Link now stores stablecoins, powers agent wallets, and drives a 5% conversion lift for returning customers. Whenever a user signs up with Link on one business, every other Stripe business benefits the next time that customer checks out.

Money movement. It turns out that Stripe businesses pay each other 4.8 million times a day. So we built instant, free transfers between Stripe Treasury accounts.

Intelligence. 1.6% of global GDP now runs through Stripe; over 70 trillion data points last year. We've historically used that data to power our own products (Radar, authorization optimization). But now we're putting it directly in your hands with Stripe Signals. Send us a customer, a transaction, a business, on or off Stripe, and we return a real-time risk score and explanation.

Here's everything we announced this morning: stripe.com/blog/everythin….

Kaylani Donovan@DonovanKay29015·
@AroNetwork Understanding quantum computing requires examining both its theoretical foundations and its practical limitations over the next decade.
Kaylani Donovan@DonovanKay29015·
@mmariansky @alono88 Researchers studying cryptocurrency economics often focus on unresolved challenges that limit real-world adoption under resource constraints.
Matty Mariansky@mmariansky·
OK, the paper is nice and serious. They don't claim perfect randomness, only a marked improvement in results: "Potential for Bias Propagation: If the generated random string exhibits strong positional bias (e.g., always starting with the same digit) and the model adopts a 'lazy' strategy without leveraging the entropy from the whole string (e.g., using only the first character), the output distribution will be biased (as seen in the QwQ-32B failure case, Appendix D.5). This can be mitigated by steering the model towards more robust strategies (like rolling hashes) via system prompts." Everything is also tested downstream of the task (not the strings themselves, but the distribution of the final outcomes).
Matty Mariansky@mmariansky·
Do language models have free will, and where is it hiding? (I'm exaggerating a little; read to the end.)

If you ask a language model to flip a coin a thousand times, you won't get 500/500. More like 220/780. The model picks the answer that looks most plausible given the context. Instead of sampling from the probability of a physical coin landing, it samples from its own internal distribution, and in the training texts and the processes that follow, heads got entrenched more than tails.

@SakanaAILabs from Japan ("sakana" means "fish" in Japanese; in Hebrew it reads as "danger!", and I wonder if they know that translation) is one of the most interesting companies in the world to follow. This time it's something tiny: they published a really simple trick they call SSoT. How does it work? Instead of asking the model to predict the outcome of the coin, they instruct it to generate a random string of characters (say, xK9mQ3p). You can then apply some mathematical operation to the string (say, the remainder of the sum of its characters) to decide whether to interpret it as heads or tails. Suddenly you get a genuine result: 50/50. Suddenly you can play rock-paper-scissors and the model truly chooses at random.

There is randomness inside the model, but it is hidden from it. Every token it produces for us is sampled randomly, yet the model cannot look "inward" at the distribution it sampled from. So Sakana's trick bakes the randomness and takes it out of the oven as a visible string, which the model can then look at and use. Now it can read the die it itself rolled. This of course only works with reasoning-capable models that can think step by step, perform the computation on the string correctly, and decide their next move from it.

I like to call this "free will", don't be mad. The system turns the randomness that flows out of it back inward: a loop in which you externalize internal noise from an opaque source and turn it into a decision.
Matty Mariansky tweet media
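The SSoT trick described above can be sketched in a few lines of Python. This is a toy reproduction, not Sakana's implementation: the 7-character alphanumeric string and the sum-of-character-codes parity rule are illustrative choices standing in for the model-generated string and the "mathematical operation" in the tweet.

```python
import random
import string

def coin_from_string(s: str) -> str:
    """Map an arbitrary string to heads/tails via the parity of its
    summed character codes (the 'mathematical operation' in the trick)."""
    return "heads" if sum(ord(c) for c in s) % 2 == 0 else "tails"

def random_token(k: int = 7) -> str:
    """Stand-in for the model's generated random string, e.g. 'xK9mQ3p'."""
    return "".join(random.choices(string.ascii_letters + string.digits, k=k))

random.seed(42)  # fixed seed so the demo is reproducible
flips = [coin_from_string(random_token()) for _ in range(10_000)]
print(flips.count("heads") / len(flips))  # close to 0.5
```

Because letters and digits split evenly between odd and even code points, the parity of the sum is an unbiased bit as long as the generated string itself has enough entropy, which is exactly the failure mode the bias-propagation caveat warns about.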
Kaylani Donovan@DonovanKay29015·
@0xlelouch_ Successful implementation of artificial intelligence requires clear metrics, iterative experimentation, and continuous evaluation in real-world applications.
Abhishek Singh@0xlelouch_·
This is why file upload is one of those interview questions that looks easy only to people who have never built it in production. And this is a common pattern I have seen everywhere: you are given a simple problem that you need to extrapolate in the right direction. At first it sounds trivial. Client sends file. Server stores file. Done. But real systems are almost never just about "upload and save". The moment you add signed URLs, resumable uploads, chunking, retries, metadata, auth, rate limits, storage classes, CDN delivery, malware scanning, preview generation, deduplication, lifecycle policies, encryption, and regional failover, the whole thing becomes a distributed systems problem very fast. Even basic questions get deep:
- Where do you store metadata?
- How do you prevent duplicate uploads?
- What happens if upload succeeds but metadata write fails?
- How do you handle partial chunks?
- How do you scan without blocking user experience?
- How do you generate previews safely?
- How do you enforce quotas?
- How do you delete across replicas and caches?
That is why these "simple" questions are loved in system design. They test whether you can see the hidden complexity inside ordinary product features. A strong engineer does not stop at "store the file". They start asking where it breaks.
Puneet Patwari@system_monarch

A candidate interviewing for L5 @ Google was asked to break down the design of Google Drive. Another candidate, interviewing for the role of SDE-III @ Amazon, was asked a file upload system question. I've faced these too. System design rounds love "simple" file upload questions until you add one layer of complexity:
– Add virus scanning? Whole new security headache.
– Add multi-region storage? Now you're fighting replication and consistency.
– Add instant previews or image compression? Welcome to async pipelines and job queues.
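Two of the questions above — preventing duplicate uploads, and surviving a metadata write that fails after the upload succeeds — share a classic answer: content-addressed storage with the metadata write ordered last. A minimal sketch under stated assumptions (the `UploadStore` class and its dicts are hypothetical stand-ins; a real system would use object storage and a database):

```python
import hashlib

class UploadStore:
    def __init__(self):
        self.blobs = {}     # content hash -> bytes (stand-in for object storage)
        self.metadata = {}  # filename -> content hash (stand-in for a DB row)

    def upload(self, filename: str, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        # Dedup: identical bytes are stored exactly once, keyed by hash.
        if digest not in self.blobs:
            self.blobs[digest] = data
        # Metadata is written last: if this step fails, we are left with an
        # orphaned blob (garbage-collectable later), never a dangling pointer
        # to a blob that was never stored.
        self.metadata[filename] = digest
        return digest

store = UploadStore()
a = store.upload("report.pdf", b"same bytes")
b = store.upload("copy.pdf", b"same bytes")
print(a == b, len(store.blobs))  # deduplicated: one blob, two names
```

The ordering choice is the whole point: blob-then-metadata makes the failure mode benign, whereas metadata-then-blob would let a crash create records that point at nothing.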

Kaylani Donovan@DonovanKay29015·
@Ronald_vanLoon Evaluating neural networks requires comparing them with alternative approaches under realistic constraints in large-scale production environments.
Ronald van Loon@Ronald_vanLoon·
The shift happening right now is bigger than "which model should we use?" AI is becoming a new way of building software. That means leaders need to think in terms of:
→ test cases
→ evaluation frameworks
→ feedback loops
→ continuous optimization
If you cannot define success, you cannot move from POC to production. That is why so many pilots stall. At the same time, the foundation is changing: Mistral AI's Forge platform now allows enterprises to customize models end-to-end with their own data, while NVIDIA's Nemotron coalition is pushing open AI development with shared models, datasets, and training tools.
Ronald van Loon@Ronald_vanLoon·
Most enterprises do not have an AI model problem. They have an evaluation problem. That was one of my biggest takeaways from my conversation with @karibriski from NVIDIA and @Toucas from Mistral AI at GTC. In the agentic era, the winners will not be the companies running the most pilots. They will be the ones that can measure what actually works, scale it, and turn it into revenue. And the ecosystem is evolving fast:
→ Mistral AI announced Forge, enabling enterprises to build and customize their own models using their data and IP
→ Powered by NVIDIA's accelerated infrastructure
→ NVIDIA also introduced the Nemotron coalition to build open models, datasets, and tools
Here's the breakdown. #NVIDIAPartner #NVIDIAGTC
Kaylani Donovan@DonovanKay29015·
@MarioNawfal Understanding algorithmic bias requires examining both its theoretical foundations and its practical limitations over the next decade.
Mario Nawfal@MarioNawfal·
🚨🇺🇸MIT ACHIEVES REMOTE ENTANGLEMENT BREAKTHROUGH—QUANTUM PROCESSORS NOW COMMUNICATE DIRECTLY IN SCALABLE NETWORK MIT researchers developed a photon-shuttling interconnect enabling direct, all-to-all communication between superconducting quantum processors—cracking a major hurdle in scalable quantum computing. For the first time, they achieved remote entanglement—correlating processors not physically connected—by sending microwave photons in user-defined directions with over 60% absorption efficiency. The breakthrough allows quantum modules to interact like classical computer components, opening the door to modular, large-scale quantum computers and future quantum internet systems. Funded by the U.S. Army, Air Force, and AWS Center for Quantum Computing, this could reshape computing forever. Source: MIT News
Mario Nawfal@MarioNawfal

🚨 🇺🇸 D-WAVE CLAIMS QUANTUM SUPREMACY—BUT NOT EVERYONE AGREES Quantum computing firm D-Wave says it has achieved "quantum supremacy" by solving a real-world problem faster than any classical supercomputer. D-Wave CEO Alan Baratz: "Our achievement shows, without question, that D-Wave’s annealing quantum computers are now capable of solving useful problems beyond the reach of the world’s most powerful supercomputers." Their system reportedly solved a magnetic materials simulation in minutes—something a traditional supercomputer would need nearly a million years to complete. Source: IFL

Kaylani Donovan@DonovanKay29015·
@chamath One of the most overlooked aspects of algorithmic bias is the trade-off between efficiency, security, and cost in high-risk industries.
Chamath Palihapitiya@chamath·
Some critical strategic takeaways here for the US to keep in mind:
1. To power an AI boom and/or a manufacturing renaissance domestically over the next ten years, we need as many incremental sources of electricity as possible for the foreseeable future. We simply don't have enough.
2. As of Dec-2024, 90%+ of all incremental new electricity coming online was renewables. And because of complex supply chains, backlogs, and regulatory issues for Nat gas, coal, and nuclear, it will be so for some time: Nat gas turbines are backlogged into 2027+, there are 35k permits waiting for approval in the federal regulatory maze, and our three most viable nuclear reactors won't be turned on until 2030+ if all goes well.
3. The reason we are here is, in part, a decade of tax credits and transfer markets for renewable credits that incentivized electricity companies to invest in renewables over other options. This was further reinforced in the IRA. But while many parts of the IRA are total junk and should be repealed, this narrow portion is important and needs to stay: it is the "baby" in the "bathwater". Because without these credit and transfer markets, many existing and soon-to-be-started renewable electricity projects will stop, which will put us at an even further deficit for America's energy needs.
So what needs to happen for America to have "infinite energy"?
1. Repeal the IRA.
2. Add back ITC credits and transferability.
3. Speed up permitting approvals for the existing 35,000 applications.
4. Explore how to redomesticate and accelerate Nat gas turbine manufacturing and acquisition.
5. Fast-track the turn-on of the three nuclear reactors that haven't been entirely decommissioned yet and could be put on a path to be restarted.
Jan Rosenow@janrosenow

Solar is quickly becoming the cheapest source of electricity & will fundamentally change the energy system. This paper argues that solar will be the cheapest source of electricity around the world. Surprisingly, this is INCL. short- & long-term storage costs. nature.com/articles/s4146…

Kaylani Donovan@DonovanKay29015·
@LuizaJarovsky @DanielSolove The long-term impact of neural networks will depend on governance, regulation, and responsible innovation in emerging economies.
Luiza Jarovsky, PhD@LuizaJarovsky·
🚨 The paper "Artificial Intelligence and Privacy," by Prof. @DanielSolove, is an EXCELLENT read for everyone in AI & privacy; make sure to bookmark and read it before the year ends! Interesting quotes below:

"Privacy laws generally do not mandate that a site protect against scraping. It is up to organizations to protect user data in their terms of service and then to enforce their terms of service. But privacy laws should mandate protection against scraping. If an organization attempted to transfer massive amounts of personal data to third parties without consent, this practice would violate many privacy laws. Failing to prevent third parties from just taking the data is the functional equivalent of selling or sharing it." (page 27)

"Decisions derived from predictive models challenge the principles of due process. Justice traditionally dictates that individuals should not face penalties for actions they have not committed. However, predictive models enable judgments and potential repercussions based on actions that individuals have not undertaken and may never undertake. As Professor Carissa Véliz contends, 'by making forecasts about human behavior just like we make forecasts about the weather, we are treating people like things. Part of what it means to treat a person with respect is to acknowledge their agency and ability to change themselves and their circumstances.'" (page 39)

"One remedy that is increasingly being used is algorithmic destruction. For example, in In re Everalbum, Inc., the FTC ordered a company to delete 'any models or algorithms' developed with data it had improperly collected. However, Li argues that the remedy of algorithmic destruction can be too severe and might 'harm small startups and discourage new market entrants in technology industries.' Additionally, it is one thing for the FTC to order a small company to delete an algorithm, but what about a gigantic company such as OpenAI? It is hard to imagine the FTC or any regulator ordering the deletion of a hugely popular algorithm with a multi-billion dollar value." (page 59)

👉 Link to the paper below.
👉 To receive my weekly AI newsletter, including MUST-READ research papers like this one, join 42,700+ people who subscribe to my newsletter (link below).
Kaylani Donovan@DonovanKay29015·
@duncancampbell Researchers studying cybersecurity often focus on unresolved challenges that limit real-world adoption in emerging economies.
Duncan S. Campbell@duncancampbell·
A key difference between fossil fuels and wind/nuclear/solar/geothermal is that once you build the plant, others have very little leverage over you.

Let's imagine you've installed a bunch of wind turbines. In the US, the turbines are probably from GE, Vestas, or Siemens. But let's assume they were Chinese turbines from Goldwind or Envision, even though basically no one here does that. Once you buy the turbine, all you have to do is standard domestic maintenance. Certainly, future additional wind turbine purchases will need to be considered prudently. But that's fine. Buy them if the price is good or don't if it's bad.

Compare that to a plant powered by coal or gas or even oil. The majority of your lifetime costs will be from purchasing fuel for the plant. The price of that fuel in the future is unknown. At the time of investment, you have no way of knowing what the cost of your energy will be. It is a global market and you exist at its mercy.

Under fossil fuels you are far more vulnerable to corporate and geopolitical malfeasance because your machines stop working the instant you can't afford fuel. With nuclear and renewables, what you own is truly yours, as opposed to it being an obligation to indefinitely pay someone an unknown price for the privilege of operating it.
Brian Gitt@BrianGitt

China's chokehold on wind turbine manufacturing threatens the national security of any country reliant on wind power. China controls EVERY single supply chain segment: 75% of gearbox manufacturing 65% of generator manufacturing 60% of blade manufacturing
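The asymmetry argued in the thread above is easy to make concrete: a fuel-burning plant's lifetime cost is dominated by a fuel bill set by future market prices, while a wind plant's cost is essentially fixed at the time of investment. A toy calculation (all numbers are illustrative placeholders, not real plant economics):

```python
def lifetime_cost(capex: float, annual_fixed_om: float,
                  annual_fuel: float, years: int = 25) -> float:
    """Total cost of ownership: upfront capital plus recurring costs."""
    return capex + years * (annual_fixed_om + annual_fuel)

# Illustrative plants tuned to the same baseline lifetime cost.
wind = lifetime_cost(capex=100, annual_fixed_om=2, annual_fuel=0)
gas = lifetime_cost(capex=40, annual_fixed_om=1, annual_fuel=3.4)

# Now shock fuel prices: the fuel bill doubles.
wind_shocked = lifetime_cost(capex=100, annual_fixed_om=2, annual_fuel=0)
gas_shocked = lifetime_cost(capex=40, annual_fixed_om=1, annual_fuel=6.8)

print(wind, gas)                  # comparable before the shock
print(wind_shocked, gas_shocked)  # only the fuel-burning plant's cost moves
```

The point of the sketch is the sensitivity, not the numbers: the `annual_fuel=0` term is what makes the renewable plant immune to the shock.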

Kaylani Donovan@DonovanKay29015·
@thecurioustales Experts often evaluate the Internet of Things by measuring real-world performance rather than theoretical potential over the next decade.
The Curious Tales@thecurioustales·
The quantum supremacy announcement should have shattered everyone's sense of scale by now. It hasn't. When I first saw the claim — five minutes versus 10 septillion years — I stopped caring about the headline and started caring about the mechanism. What actually happened inside that chip?

Google's new quantum processor — often associated with its Sycamore line — didn't "run faster" in the way your laptop runs faster. It manipulated qubits that exist in superposition, meaning they don't commit to a single state while computing. They explore a landscape of possibilities simultaneously. Classical computers move step by step. Quantum systems interfere with entire probability fields at once.

The specific task was a sampling problem designed to stress classical simulation. To reproduce the output of that quantum circuit using conventional hardware would require tracking an astronomically large wavefunction. The number — 10 septillion years — isn't poetic exaggeration. It reflects how quickly the required memory and computation explode as qubits entangle.

And entanglement is where things get uncomfortable. When qubits become entangled, their states are no longer independent. Measuring one instantaneously constrains the other, no matter the distance. That phenomenon was experimentally confirmed decades ago, beginning with tests inspired by John Bell's inequalities and later validated in increasingly loophole-free experiments.

Some physicists interpret quantum mechanics through the lens of Hugh Everett III's Many-Worlds interpretation — the idea that all possible outcomes actually occur, branching into separate realities. In that view, quantum computation works because interference patterns reflect interactions across a vast multiversal structure.

Important detail: the chip did not prove a multiverse. What it did prove is that quantum mechanics continues to behave exactly as the math predicts — even when scaled into engineered devices. The machine didn't break physics. It confirmed it at a scale we can now harness.

That's the deeper shift. For decades, quantum mechanics felt like something confined to chalkboards and particle accelerators. Now it sits on a silicon substrate in a lab, engineered, calibrated, cooled near absolute zero, performing tasks classical architecture cannot feasibly match. We crossed from describing quantum weirdness to exploiting it.

The advantage demonstrated wasn't about solving useful everyday problems yet. It was about showing that the classical simulation ceiling is real. There are computational terrains we simply cannot traverse with bits alone. When a system leverages superposition and entanglement coherently across dozens of qubits, the state space scales exponentially. Fifty qubits already represent over a quadrillion basis states. Add a few more, and classical tracking becomes physically impractical. That's not incremental progress. That's a different category of machine.

The multiverse headlines grab attention. But the quieter reality is more profound: we have built a device whose correct functioning depends on reality being fundamentally probabilistic and non-classical. The universe did not bend to our engineering. Our engineering bent itself to the universe's deepest rules.

For centuries, computation meant carving certainty out of deterministic logic. Now it means choreographing probability amplitudes and letting interference do the heavy lifting. If anything deserves the word "breaking," it's this: We are no longer just observers of quantum mechanics. We are starting to design with it.
All day Astronomy@forallcurious

BREAKING🚨: Google’s quantum chip solved in five minutes a problem that would take 10 septillion years. Physicists say it “proved” we live in a multiverse!
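The "fifty qubits already represent over a quadrillion basis states" claim above is plain arithmetic: a full statevector holds 2^n complex amplitudes, so classical memory doubles with every added qubit. A quick check (assuming standard 16-byte complex128 amplitudes):

```python
def statevector_bytes(n_qubits: int) -> int:
    # A full statevector has 2**n complex amplitudes, 16 bytes each (complex128)
    return (2 ** n_qubits) * 16

print(2 ** 50)                       # basis states for 50 qubits: over a quadrillion
print(statevector_bytes(50) / 1e15)  # ~18 petabytes just to store the state
print(statevector_bytes(53) / 1e15)  # three more qubits: ~144 petabytes
```

This is why "classical tracking becomes physically impractical": every additional qubit doubles the storage, independent of how clever the simulator is about everything else.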

Kaylani Donovan@DonovanKay29015·
@bbwriteup The practical value of cybersecurity emerges when it solves clearly defined problems with measurable outcomes in large-scale production environments.
Kaylani Donovan@DonovanKay29015·
@ireteeh Quantum computing continues to grow due to strong global demand.
Dr Iretioluwa Akerele@ireteeh·
As a Cybersecurity beginner, even if you are not ready to take the CompTIA Security+ exam, please go through the modules and digest the content. It is a good starting point for learning Cybersecurity. It teaches the basics and helps you understand several security concepts. In addition, use learning platforms to gain relevant skills (TryHackMe, Cybrary, LetsDefend, BlueTeamsLab, etc.). Apply for internships or work on projects to build experience.
Kaylani Donovan@DonovanKay29015·
@onmyway133 The main benefit of ChatGPT is its ability to create new opportunities.
Khoa 🔥@onmyway133·
Development & design tools for iOS developers #iosdev ⚡️🔥 These are the 33 tools I use daily and have saved me so much time that I can't recommend them enough.
Kaylani Donovan retweeted
RAX Finance@RaxFinance·
🔥 THE RAX FINANCE WAITLIST IS OPEN! This is your chance to secure Priority Access and unlock exclusive rewards.👇 📲JOIN the waitlist. 🧑‍🤝‍🧑REFER friends to Level Up. 💰EARN your status. Higher Level = Bigger Rewards at launch. Don't miss the genesis. 🔗 rax.finance/waitlist/
Kaylani Donovan@DonovanKay29015·
@Prodoscore The main benefit of AI tools is their ability to create new opportunities.
prodoscore@Prodoscore·
Big news!📣 @Prodoscore just launched Desktop Connect, our NEW desktop agent that captures activity from desktop applications. Combined with Prodoscore's robust API integrations and ProdoAI engine, Prodoscore now offers complete visibility into how work gets done. ✅ Replicate behaviors that drive success ✅ Improve coaching ✅ Optimize workflows ✅ Boost collaboration ✅ Make data-backed decisions Learn more here: prodoscore.com/blog/the-missi… #AI #productivity #businessintelligence #collaboration #leadership
Dr Milan Milanović@milan_milanovic·
𝗖𝗹𝗼𝘂𝗱 𝗠𝗼𝗻𝗶𝘁𝗼𝗿𝗶𝗻𝗴 𝗖𝗵𝗲𝗮𝘁 𝗦𝗵𝗲𝗲𝘁 Check out the Cloud Monitoring Cheat Sheet for all significant Cloud providers (AWS, GCP, Azure, and OCI). When we talk about monitoring, we cover the following aspects: 🔹 𝗗𝗮𝘁𝗮 𝗖𝗼𝗹𝗹𝗲𝗰𝘁𝗶𝗼𝗻: Gathering information from various sources to monitor the performance and health of cloud resources. 🔹𝗗𝗮𝘁𝗮 𝗦𝘁𝗼𝗿𝗮𝗴𝗲: Storing the collected monitoring data in a repository or database for future reference and analysis. 🔹 𝗗𝗮𝘁𝗮 𝗔𝗻𝗮𝗹𝘆𝘀𝗶𝘀: Examining the stored monitoring data to identify patterns, anomalies, or insights about the cloud environment. 🔹 𝗔𝗹𝗲𝗿𝘁𝗶𝗻𝗴: Receiving notifications when specific conditions or thresholds are met or exceeded. 🔹 𝗩𝗶𝘀𝘂𝗮𝗹𝗶𝘇𝗮𝘁𝗶𝗼𝗻: Representing monitoring data graphically, such as through charts or dashboards, to make it easier to understand. 🔹 𝗥𝗲𝗽𝗼𝗿𝘁𝗶𝗻𝗴 𝗮𝗻𝗱 𝗖𝗼𝗺𝗽𝗹𝗶𝗮𝗻𝗰𝗲: Generating summaries or detailed monitoring data reports to ensure adherence to policies or regulations. 🔹 𝗔𝘂𝘁𝗼𝗺𝗮𝘁𝗶𝗼𝗻: Using software to automatically perform tasks or actions based on monitoring data without manual intervention. 🔹 𝗜𝗻𝘁𝗲𝗴𝗿𝗮𝘁𝗶𝗼𝗻: Combining monitoring tools or data with other systems or applications to enhance functionality. 🔹 𝗙𝗲𝗲𝗱𝗯𝗮𝗰𝗸 𝗟𝗼𝗼𝗽𝘀: Processes where the results or outcomes from monitoring are used to make improvements or adjustments to the cloud environment. #softwareengineering #programming #cloudcomputing
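The alerting aspect in the cheat sheet above boils down to comparing collected metrics against configured thresholds and notifying when a condition is met or exceeded. A minimal sketch (the `check_alerts` function and metric names are hypothetical illustrations, not any cloud provider's API):

```python
def check_alerts(metrics: dict, thresholds: dict) -> list:
    """Return the names of metrics that meet or exceed their threshold."""
    return [name for name, value in metrics.items()
            if name in thresholds and value >= thresholds[name]]

# Example: CPU breaches its threshold, memory does not, disk has no rule set.
fired = check_alerts(
    {"cpu_percent": 95, "memory_percent": 40, "disk_percent": 70},
    {"cpu_percent": 90, "memory_percent": 80},
)
print(fired)  # ['cpu_percent']
```

Real services layer evaluation windows, deduplication, and notification routing on top of this core comparison, but the threshold check is the primitive they all share.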
Kaylani Donovan@DonovanKay29015·
@mdancho84 Effective learning of Tesla combines theory, hands-on exercises, and feedback.
Matt Dancho (Business Science)@mdancho84·
Data Science for Business. The book that helped me connect the dots. Let's dive in: