
Reading Notes #668

This week covers Microsoft’s open-source Agent Framework for agentic AI, prompt-injection risks and mitigations, and the causes of language model hallucinations. It also highlights NuGet package security updates, Azure SQL Data API Builder improvements, Reka’s new Parallel Thinking feature, and the latest in AI benchmarking.


Cloud

Programming

AI

Databases

Miscellaneous


Sharing my Reading Notes is a habit I started a long time ago: each week, I share a list of the articles, blog posts, and books that caught my interest.

If you have interesting content, share it!

~frank