Arjen Vrielink

10K posts


@arjenvrielink

Owl at @satelligenceEO, RemoteSensing, OpenSource, theSwarm, Containers, Patterns. “Bacteria that turns shit into fuel” - the Mouth of Joe.

Utrecht, Netherlands · Joined June 2008
826 Following · 1.6K Followers
Pinned Tweet
Arjen Vrielink@arjenvrielink·
The result of no more #sesamestreet on Dutch public TV: My 3y/old daughter: "what's that banana?". "That's not a banana, that is Bert."
1 reply · 0 reposts · 7 likes · 0 views
Arjen Vrielink retweeted
Yohan@yohaniddawela·
Studying the Earth involves more time downloading files than analysing them. The European Space Agency holds 90 petabytes of planetary data, and they just fundamentally changed how anyone interacts with it.

For years, working with Copernicus Sentinel data meant pulling massive files from SAFE archives. You had to store them locally, install mission-specific software, and navigate formats built before cloud computing existed. If you wanted to check a single scene for cloud cover, you couldn't just glance at it. You paid a massive upfront cost in time and hard drive space.

That era is now over. ESA is transitioning Sentinel data to Zarr, a cloud-native format that treats data as an API instead of a static file. So ESA has now launched the EOPF Sentinel Zarr Explorer. Everything happens directly from cloud storage. You don't download a single megabyte of raw data to your machine.

The workflow starts with discovery. The platform uses STAC, meaning you browse the massive catalogue using open community standards. You locate the exact coordinates and timeframes you need instantly. Then you look at the data. You can load a Sentinel-1 radar or Sentinel-2 optical scene right in your web browser.

Analysis happens in the exact same environment through openEO Studio. You write Python code in your browser, define a processing graph, and execute it. A researcher can track algal blooms in the Venice Lagoon by computing a Normalised Difference Chlorophyll Index, and the result appears instantly as an interactive map. The barrier between a hypothesis and a working environmental analysis is now just a few lines of code.

The developers actively avoided building a walled garden. They collaborated directly with the wider community to establish modular geospatial conventions. Because they built on open standards, desktop tools like QGIS and libraries like GDAL can read the exact same data without any proprietary plugins.
Anyone with a web browser can now run analyses that used to require a dedicated computational lab.
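The chlorophyll-index step described above boils down to simple band arithmetic. A minimal sketch in plain numpy, not the actual openEO Studio code: the band pairing (Sentinel-2 B5 red edge at ~705 nm against B4 red at ~665 nm) is the conventional choice for NDCI, and the reflectance values here are invented. In the real workflow the arrays would be loaded lazily from the cloud-hosted Zarr store (e.g. via xarray) rather than typed in.

```python
import numpy as np

# Hypothetical surface-reflectance tiles; in practice these would be
# lazily read from the cloud-hosted Sentinel-2 Zarr store.
red = np.array([[0.04, 0.05], [0.06, 0.03]])       # B4, red (~665 nm)
red_edge = np.array([[0.08, 0.05], [0.09, 0.12]])  # B5, red edge (~705 nm)

# Normalised Difference Chlorophyll Index: (B5 - B4) / (B5 + B4).
# Higher values indicate more chlorophyll, e.g. an algal bloom.
ndci = (red_edge - red) / (red_edge + red)
print(ndci)
```

Like all normalised-difference indices, the result is bounded to [-1, 1], which makes scenes and dates directly comparable.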
6 replies · 35 reposts · 248 likes · 10.9K views
Arjen Vrielink@arjenvrielink·
It's a matter of time …
0 replies · 0 reposts · 0 likes · 52 views
Arjen Vrielink retweeted
World Resources Institute@WorldResources·
🚜🌾New maps developed by @Cornell in collaboration w/ @landcarbonlab show where crop emissions are highest. So which countries have the highest crop emissions? Six countries account for 61% of all crop emissions: Brazil, China, India, Indonesia, Thailand and the United States. But the reasons differ and high emissions do not automatically mean a system is inefficient. Learn why: go.wri.org/crops-emission…
3 replies · 76 reposts · 187 likes · 8.2K views
Arjen Vrielink retweeted
AI at Meta@AIatMeta·
We’re announcing Canopy Height Maps v2 (CHMv2), an open source model for high-resolution global forest canopy mapping, developed in partnership with the @WorldResources. CHMv2 leverages our DINOv3 Sat-L vision model, specifically optimized for satellite imagery, to deliver substantial improvements in accuracy, detail, and global consistency. 🔗 Learn more: go.meta.me/70d2e9
39 replies · 90 reposts · 656 likes · 60.5K views
Arjen Vrielink@arjenvrielink·
The AI opportunity that everyone overlooks: Brain rot is what happens to humans fed low-quality content. Data rot is what happens to AI fed itself. The Jevons paradox guarantees we'll produce more data, not better data. The next moat isn't a better algorithm, it's curation. 2/3
1 reply · 0 reposts · 0 likes · 31 views
Arjen Vrielink@arjenvrielink·
Humans change slow. The machine changes fast. The machine has no interior — it cannot grieve, hesitate, or doubt. That gap is not a problem to solve. It's territory to occupy. We need poets more than we need engineers. 1/3
1 reply · 0 reposts · 0 likes · 40 views
Arjen Vrielink retweeted
World Resources Institute@WorldResources·
If you’re restoring forests, the question isn’t just: “Did trees get planted?” It’s also: “Are they growing into healthy forests?” That is what canopy height can help tell you.

Today, @landcarbonlab is sharing Version 2 of Global Tree Canopy Height, the 1-meter resolution global map that can detect individual trees, now upgraded for significantly better performance in tall and complex forests. Accuracy improved by about 60% in independent validation (R² 0.53 → 0.86), which sharpens how this data supports:
🌿 restoration progress tracking over time
🌿 carbon storage and removal estimates grounded in forest structure
🌿 biodiversity monitoring that depends on habitat complexity

Built with AI at Meta using the open-source DINOv3 model, supported by the @BezosEarthFund, and openly available on AWS and Google Earth Engine.

Learn more: go.wri.org/4m4pHH
Read the pre-print: go.wri.org/CgWEi5
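The R² figure quoted above is the standard coefficient of determination used in regression validation. As a reminder of what that metric measures, here is a minimal sketch with invented canopy heights; these numbers are purely illustrative, not WRI's validation data or method:

```python
import numpy as np

def r_squared(reference, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    reference = np.asarray(reference, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ss_res = np.sum((reference - predicted) ** 2)            # residual error
    ss_tot = np.sum((reference - reference.mean()) ** 2)     # total variance
    return 1.0 - ss_res / ss_tot

# Illustrative canopy heights in metres (lidar reference vs. model prediction)
ref = [5.0, 12.0, 30.0, 18.0, 25.0]
pred = [6.0, 10.0, 28.0, 19.0, 24.0]
print(round(r_squared(ref, pred), 3))
```

An R² of 1.0 means the map reproduces the reference heights exactly; 0.86 versus 0.53 means far more of the height variation in independent reference data is now explained by the map.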
5 replies · 64 reposts · 196 likes · 8.3K views
Arjen Vrielink retweeted
Kyle Walker@kyle_e_walker·
Announcing {freestiler}: a high-performance vector tiling engine for R and Python. Generate vector tiles for your maps directly from R/Python spatial data, @duckdb queries, and local spatial files. Check out the docs here: walker-data.com/freestiler Some highlights:
8 replies · 68 reposts · 695 likes · 24.3K views
Arjen Vrielink retweeted
Google Earth@googleearth·
We are thrilled to announce that Google’s Satellite Embedding dataset, powered by @GoogleDeepMind's AlphaEarth Foundations model, has been updated for 2025. This additional year of coverage now unlocks the ability to look back, compare, and detect change across the planet with unprecedented clarity. Learn more here ➡️ goo.gle/3NdIxWn

Part of Google's Earth AI, the new data represents the state of the planet throughout 2025, distilling petabytes of multi-sensor data into a 64-dimensional embedding for every 10 meter pixel. What’s new in this update? 🧵👇

- 🌍 2025 Data: The state of the planet throughout 2025 is now available on the Earth Engine Data Catalog and Google Cloud Storage.
- 🔬 Unprecedented Change Detection: Because these embeddings capture subtle spectral, spatial and temporal signatures, they make it easy to spot significant year-over-year changes without the heavy lifting of raw image processing.
- 💚 Long-term Commitment: We are formalizing our commitment to the ongoing production of these annual layers to support your operational workflows.

Since we first launched the Satellite Embedding dataset, we’ve been inspired by how our community is putting this data to work. Applications range from ecosystem mapping and agricultural crop-typing to carbon stock prediction. We can’t wait to see what you do next. #EarthEngine #GeoAI #DeepMind
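One simple way to turn per-pixel embedding vectors like these into a year-over-year change map is to compare the two annual layers with cosine similarity. A minimal sketch with random stand-in vectors: the real embeddings come from the Earth Engine catalogue, and both the unit-normalisation step and the 1 − cosine score here are illustrative choices, not Google's published method.

```python
import numpy as np

rng = np.random.default_rng(0)

def unit(v):
    """Normalise vectors along the last axis to unit length."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Hypothetical 64-dim embeddings for a 2x2 patch of 10 m pixels, two years.
# Random stand-ins for the real annual embedding layers.
emb_2024 = unit(rng.normal(size=(2, 2, 64)))
emb_2025 = unit(rng.normal(size=(2, 2, 64)))

# Change score per pixel: 1 - cosine similarity between years.
# Identical embeddings score 0; unrelated ones score near 1.
change = 1.0 - np.sum(emb_2024 * emb_2025, axis=-1)
print(change.shape)
```

Thresholding such a score map is what lets you flag significant year-over-year change without reprocessing any raw imagery.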
48 replies · 392 reposts · 3.1K likes · 209K views
Arjen Vrielink@arjenvrielink·
@MachineMusic81 Arnaut Pavle rips, fucks, rocks, is 🔥. NOQA. Even the name. Or is that because it sounds hard in Dutch? Or is it just me?
1 reply · 0 reposts · 1 like · 10 views
Arjen Vrielink retweeted
Suzie Dawson@Suzi3D·
I'm going to keep saying it until people get it. AI *is* mass surveillance. All AI. Not some. All. By design and by default. x.com/heygurisingh/s…
Guri Singh@heygurisingh

🚨 Stanford just analyzed the privacy policies of the six biggest AI companies in America. Amazon. Anthropic. Google. Meta. Microsoft. OpenAI. All six use your conversations to train their models. By default. Without meaningfully asking. Here's what the paper actually found.

The researchers at Stanford HAI examined 28 privacy documents across these six companies: not just the main privacy policy, but every linked subpolicy, FAQ, and guidance page accessible from the chat interfaces. They evaluated all of them against the California Consumer Privacy Act, the most comprehensive privacy law in the United States. The results are worse than you think.

Every single company collects your chat data and feeds it back into model training by default. Some retain your conversations indefinitely. There is no expiration. No auto-delete. Your data just sits there, forever, feeding future versions of the model. Some of these companies let human employees read your chat transcripts as part of the training process. Not anonymized summaries. Your actual conversations.

But here's where it gets genuinely dangerous. For companies like Google, Meta, Microsoft, and Amazon (companies that also run search engines, social media platforms, e-commerce sites, and cloud services) your AI conversations don't stay inside the chatbot. They get merged with everything else those companies already know about you. Your search history. Your purchase data. Your social media activity. Your uploaded files.

The researchers describe a realistic scenario that should make you pause: You ask an AI chatbot for heart-healthy dinner recipes. The model infers you may have a cardiovascular condition. That classification flows through the company's broader ecosystem. You start seeing ads for medications. The information reaches insurance databases. The effects compound over time. You shared a dinner question. The system built a health profile.

It gets worse when you look at children's data. Four of the six companies appear to include children's chat data in their model training. Google announced it would train on teenager data with opt-in consent. Anthropic says it doesn't collect children's data but doesn't verify ages. Microsoft says it collects data from users under 18 but claims not to use it for training. Children cannot legally consent to this. Most parents don't know it's happening.

The opt-out mechanisms are a maze. Some companies offer opt-outs. Some don't. The ones that do bury the option deep inside settings pages that most users will never find. The privacy policies themselves are written in dense legal language that researchers, people whose job is reading these documents, found difficult to interpret.

And here's the structural problem nobody is addressing. There is no comprehensive federal privacy law in the United States governing how AI companies handle chat data. The patchwork of state laws leaves massive gaps. The researchers specifically call for three things: mandatory federal regulation, affirmative opt-in (not opt-out) for model training, and automatic filtering of personal information from chat inputs before they ever reach a training pipeline. None of those exist today.

The uncomfortable truth is this: every time you type something into ChatGPT, Gemini, Claude, Meta AI, Copilot, or Alexa, you are contributing to a training dataset. Your medical questions. Your relationship problems. Your financial details. Your uploaded documents. You are not the customer. You are the curriculum. And the companies doing this have made it as hard as possible for you to stop.

27 replies · 3.3K reposts · 8.3K likes · 153.1K views
Arjen Vrielink retweeted
Chidanand Tripathi@thetripathi58·
I mentioned a random brand of dog food in a private conversation. 10 minutes later, I had an ad for it. It’s not a coincidence, and it’s not magic. Your phone is "shadow-logging" your life through 5 specific settings you’ve likely never touched. Here is how to stop the eavesdropping:
76 replies · 1K reposts · 6K likes · 2.1M views