A useful variation on AI Dungeon-style (AID) text games would be to turn them into shared public game trees of pre-generated options which the player selects from, Choose-Your-Own-Adventure-book style. This trades storing kilobytes for running teraflops and so can dramatically reduce costs, as players spend most of their time reading cached output (rarely needing or wanting to generate brand-new output requiring a NN run); it can increase quality, as players collectively uprank the highest-quality actions/outcomes; and it caters to newbies who don't understand the power of NN-backed text games and flail around.

Revisiting AI Dungeon (AID) in the light of a year of GPT-3, I would like to propose a radical redesign of it based on the problems it has encountered. AID is one of the most interesting uses of the GPT-2 & GPT-3 neural network models: computer games, forever frustrated by their inability to offer worlds as arbitrarily complex and realistic as a human Dungeon Master running Dungeons & Dragons games (which is effectively a collaborative fiction-writing exercise), no matter how many verbs are added to the text-adventure parser or how many 3D artists are hired, finally can handle anything a player can type in, and improvise with it. At its best, an AI Dungeon game like "My Musical Troupe of Orcs Uses Music to Advance Orc Rights" really feels like simulating an entire world.

Of course, as current NNs have many limitations (they are far from human-level intelligence, lack any kind of memory, are unable to plan, are not trained to create high-quality fiction, have weak common sense, etc.), AID and its imitators typically do not deliver such a peak experience on average. And in practice, even the average experience tends to be compromised by current technical & economic realities: AID, as run by Nick Walton's startup Latitude as of April–June 2021, has been experiencing problems relating to GPT-3 cost, and its use of the OpenAI API & OA's mandatory moderation thereof.

AID Problems

AID and its imitators are currently designed in the most straightforward way possible: a seed text, often unique or customized, and then text generation step by step, waiting for player text responses between steps. This design runs into three problems:

1. Cost: every turn invokes a full-cost call to a NN model. No matter what you do, this is always going to be expensive. This has devastating consequences for what players can be allowed to do and how often, how hard they must be monetized, how one is beholden to APIs, centralization requiring/enabling censorship, etc.

2. Quality: models like GPT-3 can generate stunningly good output when cherry-picked at a level like 'best of 20'; the other 19, however, range from 'meh' to 'atrocious'. Few people have the patience, or funds, to do 20 samples per action/outcome, because of #1. If one could, however, the results might be stunning.

3. Learning Curve: the default experience for AID seems to be to open it up, type in a few sentences like "Hello, I am a human", decide it's boring, and quit. It doesn't show you what it can do; one has to elicit cool stuff by demanding it, of the sort most people quite understandably expect computers to be completely incapable of.

Fixing these AID problems is intrinsically difficult. The AID model is inherently extremely expensive, and tweaks like model sparsification/distillation can deliver only so much in the way of cost savings before the quality goes to hell. (How can a computer be a good D&D DM when it also takes 20 seconds to load a social media website, which then is broken?)
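The shared game-tree proposal above can be sketched in a few lines. This is a minimal illustration, not AID's or anyone's actual implementation: the `Node` class, the `uprank` voting, and the `generate_with_nn` placeholder are all hypothetical names I've invented for the sketch. The key point is that a cached branch costs a dictionary lookup (kilobytes), while only a genuinely novel action falls through to the expensive model call (teraflops).

```python
# Hypothetical sketch of a shared public game tree of pre-generated
# options, Choose-Your-Own-Adventure style. None of these names come
# from AID itself; they only illustrate the caching/upranking idea.
from dataclasses import dataclass, field

def generate_with_nn(context: str, action: str) -> str:
    # Stand-in for a full-cost NN call (e.g. an API request);
    # in the proposed design this path is the rare case.
    return f"[freshly generated continuation of {action!r}]"

@dataclass
class Node:
    text: str                                     # cached story output
    score: int = 0                                # collective player upranking
    children: dict = field(default_factory=dict)  # action -> Node

    def choose(self, action: str) -> "Node":
        """Follow a cached branch if one exists; only fall back to an
        expensive NN call for a genuinely novel action, then cache the
        outcome for every future player."""
        if action not in self.children:
            self.children[action] = Node(generate_with_nn(self.text, action))
        return self.children[action]

    def uprank(self) -> None:
        """Players collectively vote up the highest-quality outcomes."""
        self.score += 1
```

A second player choosing the same action at the same node gets the cached `Node` back with no model call at all, which is where the cost savings come from.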
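The 'best of 20' cherry-picking mentioned under Quality is simple to express. A hedged sketch follows; `generate` and `rank` are placeholders I've introduced (in a shared game tree, `rank` could be backed by accumulated player votes rather than a model), and nothing here reflects AID's real pipeline:

```python
def best_of_n(generate, rank, n=20):
    """Cherry-pick one output: draw n samples from a (costly) sampler
    and keep the highest-ranked. Both `generate` and `rank` are
    hypothetical stand-ins -- e.g. an API call and player votes."""
    samples = [generate() for _ in range(n)]
    return max(samples, key=rank)
```

The catch, per problem #1, is that this multiplies the per-turn cost by n; the shared game tree amortizes that one-time cost across every later reader of the branch.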