Magic, which builds AI tools for coding and for automating software development tasks, raised $320M from Eric Schmidt and others, bringing its total raised to $465M
super proud of our team that made this possible. https://magic.dev/... Ben Bajarin / @benbajarin : I want all the tokens! Rowan Cheung / @rowancheung : Magic's new AI model has a 100M token context wi...
Apple's WWDC demos, which integrate generative AI across its products, show the value of treating AI as a feature rather than a standalone app or device
what's not to love? Michael J. Miller / PCMag : Apple Intelligence Satisfies Demand, But Doesn't Change the Game M.G. Siegler / Spyglass : “AI” is Apple's “Voldemort” Ben Thompson / Stratechery : [FRE...
A look at Daylight Computer's DC1, a 10.5-inch, $729 tablet that feels like a hybrid of an iPad and a Kindle and runs on a custom Android-based OS called SolOS
already compatible with #Obsidian and the new e-ink mode I created for it. X: Ian Harber / @ianharber : “It's a tool, not a master... it doesn't try to consume you... technology that feels a little bi...
How researchers used AI to read the Herculaneum papyri, charred in 79 AD by Mount Vesuvius' eruption, potentially rewriting key parts of ancient world history
Well, make a 3D CT scan, use the images to virtually unfold, recognize the ink drops, and off you go. Amazing work by three students (from Berlin, Nebraska and ETH Zurich). … X: Nat Friedman / @natfr...
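For readers curious what "recognize the ink drops" means in practice, here is a toy sketch of the idea, not the researchers' actual pipeline. It assumes the scroll's CT volume has already been segmented and flattened into small voxel patches, and trains a tiny 3D CNN to label each patch ink or no-ink; the real Vesuvius Challenge models were far larger and trained on hand-labeled ink traces.

```python
# Toy sketch of patch-based ink detection over a virtually unwrapped scroll.
# NOT the actual competition code; data here is synthetic stand-in noise.
import torch
import torch.nn as nn

class InkDetector(nn.Module):
    """Tiny 3D-patch classifier: ink vs. bare papyrus."""
    def __init__(self, depth=16, size=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Flatten(),
            nn.Linear(8 * (depth // 2) * (size // 2) ** 2, 1),
        )

    def forward(self, x):   # x: (batch, 1, depth, size, size)
        return self.net(x)  # logits; sigmoid > 0.5 => "ink"

# Synthetic stand-ins for flattened surface patches and their labels.
patches = torch.randn(64, 1, 16, 32, 32)
labels = torch.randint(0, 2, (64, 1)).float()

model = InkDetector()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(5):  # a few steps, just to show the training loop
    opt.zero_grad()
    loss = loss_fn(model(patches), labels)
    loss.backward()
    opt.step()
```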
Meta debuts Code Llama, which can generate code and debug human-written work, under the same community license as Llama 2, free for research and commercial use
Code Llama, Code Llama - Python, and Code Llama - Instruct, fine-tuned for understanding natural language instructions. … Yann LeCun / @yannlecun : You knew that was coming: Code LLama !!! - Llama-2 ...
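As a rough illustration of using the released checkpoints, here is a minimal sketch with Hugging Face transformers. The repo id codellama/CodeLlama-7b-hf follows Meta's published naming, but verify it against the model card; the weights are a multi-gigabyte download.

```python
# Minimal code-completion sketch with Code Llama via transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id follows Meta's published naming; check the model card before use.
model_id = "codellama/CodeLlama-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # large download

prompt = 'def fibonacci(n):\n    """Return the n-th Fibonacci number."""\n'
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```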
OpenAI releases alignment research and an open-source tool that uses GPT-4 to try to interpret the behavior of individual GPT-2 “neurons and attention heads”
look, we can explain some neurons in GPT-2! That's so cool! Another way to read it: we can explain 0.3% of neurons in GPT-2, which is 0.00017% the size of GPT-4. So we *really* don't understand these ...
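The tool's core loop, as described in OpenAI's writeup, is explain-simulate-score: GPT-4 writes a natural-language explanation of a neuron from its top-activating text excerpts, then predicts activations from that explanation alone, and the correlation with the real activations becomes the explanation's score. Below is a schematic sketch of that loop, with ask_gpt4 as a hypothetical stand-in for the real API calls; the actual tool is at github.com/openai/automated-interpretability.

```python
# Schematic explain-simulate-score loop; ask_gpt4 is a hypothetical stand-in.
import numpy as np

def ask_gpt4(prompt: str) -> str:
    raise NotImplementedError("stand-in for a real LLM API call")

def explain_neuron(tokens, activations):
    # Step 1: show GPT-4 top-activating excerpts, ask for a short explanation.
    excerpts = "\n".join(f"{t} -> {a:.2f}" for t, a in zip(tokens, activations))
    return ask_gpt4(f"Explain what this GPT-2 neuron fires on:\n{excerpts}")

def simulate_activations(explanation, tokens):
    # Step 2: ask GPT-4 to predict, per token, how strongly the neuron
    # *would* fire if the explanation were correct.
    reply = ask_gpt4(f"Given: '{explanation}', rate each token 0-10:\n{tokens}")
    return np.array([float(x) for x in reply.split()])

def score(real, simulated):
    # Step 3: correlation between real and simulated activations;
    # close to 1.0 means the explanation captures the neuron's behavior.
    return np.corrcoef(real, simulated)[0, 1]
```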