Chris Anderson
10.2K posts

@crandycodes
Chris Anderson (he/him) - Current: Microsoft Azure Databases - Prev: AWS DynamoDB, Microsoft Azure (Cosmos DB, Bot Framework, Functions, Web, Mobile, SQL)
Indy Metro Area, IN · Joined April 2013
1.4K Following · 5.1K Followers
Chris Anderson reposted

GraphQL aggregations are finally here in @Microsoft Fabric.
Query smarter, not harder.
Group and summarize data natively without the usual workarounds.

Make sure you check out Cosmos DB Conf happening now! Lots of the engineering team is also in the chat to answer any questions you might have. youtube.com/live/qXSur9LIf…

Chris Anderson reposted

Really cool case study showing how Toyota is using Azure Durable Functions and Cosmos DB to build multi-agent AI systems to enhance vehicle development productivity!
devblogs.microsoft.com/cosmosdb/toyot…

Chris Anderson reposted

🚀 Exciting news! Dynamic Scaling is now GA in #AzureCosmosDB! 🎉 Save up to 70% on autoscale costs for non-uniform workloads with independent scaling per region and partition.
Learn more: devblogs.microsoft.com/cosmosdb/annou…


@freshperspected I'm lucky enough to have worked with some of the best people in MSFT and AWS. Even in MSFT, the teams I've worked on can be quite different from one another. I think high functioning teams will often look different because they are going to optimize for their specific problems.

@crandycodes 👏🙌
I hope you’ll be inspired to write about your impressions as you naturally contrast with your AWS and previous Azure experiences

@crandycodes Congrats! We've been evaluating using Azure Databases for postgres to migrate some of our data over. Would love to connect!!!!

@terrajobst haha that is accurate. I debated rejoin vs join a bit.

@ben11kehoe The long boomerang. I'll go back to Functions, and then end my career in SQL Server.

@crandycodes The ol' boomerang! Now to convince you to work on Azure Functions again...

@jeremy_daly Good excuse to get a new hammer AND a new pry bar.

@houlihan_rick @MongoDB These days, it's far simpler to use an export as the source for reads. Plus, you can use incremental export to pick up any changes made since the export started, which makes catching the remaining changes easier. Generally more cost efficient than hyper-scaling, depending on the scenario.
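For reference, the incremental-export approach mentioned above maps onto DynamoDB's `ExportTableToPointInTime` API with `ExportType` set to `INCREMENTAL_EXPORT`. A minimal sketch of building that request follows; the table ARN, bucket name, and time window are placeholders, and the actual `boto3` call is left commented out since it requires live AWS credentials:

```python
# Sketch: constructing a DynamoDB incremental-export request.
# The ARN, bucket, and times below are hypothetical placeholders.
from datetime import datetime, timezone

def incremental_export_request(table_arn: str, bucket: str,
                               period_start: datetime,
                               period_end: datetime) -> dict:
    """Build kwargs for boto3's dynamodb export_table_to_point_in_time."""
    return {
        "TableArn": table_arn,
        "S3Bucket": bucket,
        "ExportType": "INCREMENTAL_EXPORT",
        "IncrementalExportSpecification": {
            # Only items changed in this window are exported.
            "ExportFromTime": period_start,
            "ExportToTime": period_end,
            "ExportViewType": "NEW_AND_OLD_IMAGES",
        },
    }

req = incremental_export_request(
    "arn:aws:dynamodb:us-east-1:123456789012:table/Orders",  # placeholder
    "my-export-bucket",                                      # placeholder
    datetime(2024, 1, 1, tzinfo=timezone.utc),
    datetime(2024, 1, 2, tzinfo=timezone.utc),
)
# boto3.client("dynamodb").export_table_to_point_in_time(**req)
```

Running a full export once and then chaining incremental exports from the first export's end time is what lets you reconcile changes without driving table-scan traffic.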

Ran into a few DynamoDB customers lately concerned about the time it takes to scan tables/GSIs when they need to update schema. We always discuss how much easier this is with range updates in @MongoDB, but I also mention the customer guidance I gave back at AWS, which is to provision all tables/GSIs at high throughput and then dial back after they go active.
DynamoDB will split tables as needed across physical partitions to meet provisioned throughput requirements. When throughput is dialed back, those partitions remain. The only time adding partitions happens fast is when a table/GSI is first provisioned or provisioned throughput is very low. The process of adding new partitions to increase throughput for high-capacity tables can take hours.
Provisioned throughput tables start with 1 partition by default, while on-demand tables get 4. Partitions deliver 3K RCU and 1K WCU each. If you need more throughput than those partition counts can deliver, the table will start splitting. A split doubles the partition count and the corresponding throughput of the table. If more capacity is needed, the table will continue to split until there are enough partitions to deliver the desired throughput. Splitting takes longer as the number of partitions increases, and each split must complete before the next can begin.
To avoid this potential problem, I used to advise customers to grossly overprovision capacity when creating a new table/GSI, then dial back allocations or switch to on-demand after the table/GSI reports active. Doing this causes DynamoDB to immediately provision enough partitions to handle the initial request. When the capacity is dialed back, the physical partitions remain, allowing provisioned throughput or on-demand consumption to scale gracefully in the future, up to the initial provisioned levels, without requiring table splits.
Applying schema updates to large DynamoDB tables will always be painful as long as updates can only be applied to a single item at a time. This technique, however, can be very useful if you need to scan a large table/GSI quickly to apply those updates.
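The doubling behavior described in the thread can be sketched as a back-of-the-envelope model. This uses only the numbers stated above (3K RCU / 1K WCU per partition, 1 starting partition for provisioned tables, sequential doubling splits); actual DynamoDB internals may differ:

```python
# Toy model of DynamoDB partition splitting, per the thread above.
# Constants come from the thread; real internals may differ.
import math

RCU_PER_PARTITION = 3_000
WCU_PER_PARTITION = 1_000

def partitions_required(rcu: int, wcu: int) -> int:
    """Minimum partitions needed to serve the requested throughput."""
    return max(math.ceil(rcu / RCU_PER_PARTITION),
               math.ceil(wcu / WCU_PER_PARTITION), 1)

def splits_required(target_rcu: int, target_wcu: int,
                    start_partitions: int = 1) -> int:
    """Sequential doubling rounds needed to reach the target capacity."""
    needed = partitions_required(target_rcu, target_wcu)
    partitions, splits = start_partitions, 0
    while partitions < needed:
        partitions *= 2  # each split doubles the partition count
        splits += 1
    return splits

# Scaling a fresh provisioned table to 40K WCU needs 40 partitions,
# i.e. 6 sequential doublings from 1 partition (1->2->4->...->64).
print(splits_required(0, 40_000))                       # 6
# A table pre-warmed with 64 partitions needs no splits at all.
print(splits_required(0, 40_000, start_partitions=64))  # 0
```

This is why overprovisioning at creation and dialing back afterward works: the expensive sequential splits happen once, up front, instead of during a later scan or traffic spike.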

@richdevelops Yeah, in your case, it probably makes sense to just rotate the account. I usually create a burner anytime I'm going to be presenting publicly. Helps to also avoid leaking other info like non-public features/etc.
Chris Anderson reposted

🚀 🚀 Amazon DynamoDB now supports AWS PrivateLink! You can now simplify private network connectivity between your Amazon Virtual Private Clouds (VPCs), DynamoDB, and your on-premises networks. Learn more 👉 go.aws/4cjBUtt



