Decent News for July

I Can Crash an O/S from the Browser
I was trying to write a useful web app that detects how much video memory is available on a device. This seemed pretty important, since Decent Apps that use LLMs are unlikely to succeed at loading models under some conditions. If we can detect those conditions, the user can be spared five minutes of loading a big model to their device only to learn it's a Big Fat No-Go for using the LLM in the app.
So I thought I had a pretty great video memory detection algorithm. I mean, I really worked hard on it for about a month, and it had clever bits. But my code crashed two different people's machines - full hard reboots. Not good stuff!
I'll spare you the technical details, but the problem amounted to a browser implementation of a memory-allocation API having no guards. Combined with the lack of any API for simply asking how much memory is available, this meant I could not learn the full amount of available memory on certain devices without risking O/S crashes.
This problem sent me back to the drawing board. But I came up with a good solution for Decent Apps. Each user has a new "LLM Max Size" setting that is used to warn them when they attempt to load a model exceeding that size.
The user can adjust "LLM Max Size" to whatever they like. It defaults to a conservative minimum. The number can be automatically adjusted based on the Decent App user's history of loading models.
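To make the "LLM Max Size" idea concrete, here's a minimal TypeScript sketch. The names (`llmMaxSizeBytes`, `shouldWarnBeforeLoad`, `recordSuccessfulLoad`) and the default value are my own illustrations, not the actual Decent Apps API:

```typescript
// Illustrative sketch of an "LLM Max Size" check - names and default are hypothetical.

// A conservative default ceiling, e.g. 1 GB.
const DEFAULT_LLM_MAX_SIZE_BYTES = 1024 * 1024 * 1024;

interface LlmSettings {
  llmMaxSizeBytes: number; // user-adjustable ceiling
}

// Warn when the model the user is about to load exceeds their ceiling.
function shouldWarnBeforeLoad(modelSizeBytes: number, settings: LlmSettings): boolean {
  return modelSizeBytes > settings.llmMaxSizeBytes;
}

// After a model larger than the current ceiling loads successfully,
// raise the ceiling so the warning stops firing for proven-safe sizes.
function recordSuccessfulLoad(modelSizeBytes: number, settings: LlmSettings): LlmSettings {
  return { llmMaxSizeBytes: Math.max(settings.llmMaxSizeBytes, modelSizeBytes) };
}
```

The key design point: the app never probes memory aggressively; it just compares a number the user controls against the model's stated size, and learns upward from successful loads.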
Create-Decent-App 2.2.0 Released
Create-Decent-App (CDA) is a project source code generator for Decent Apps. You just run `npx create-decent-app` from the command line, and you'll have a local-LLM-based web app with various capabilities in 30 seconds. You don't need LLM hosting for this - your users' devices are the hosts.
New features in CDA 2.2.0:
- On-device logging - app code can call `log()` and a message will be stored in browser persistent storage. It's never sent to a server (that would be indecent). But there's UI to copy it to a clipboard, and from there, the user can explicitly use/send the data how they like. Maybe they paste it into a Github issue to help some confused developer stop crashing their operating system.
- Predicting model loading success - based on device capabilities, model specs, and past loading success on the device, the UI will warn the user of potential problems. And it gets smarter about which models will work on a device as history accumulates.
- App settings - the app developer can create a data structure describing which settings they'd like to track for their app, and the settings UI will be created automatically, including loading/saving settings in persistent browser storage.
- LLM and logging settings - the user can manage settings related to these, e.g., disable logging or set a limit on how large a model they want to try loading.
- Toasts - a simple but useful UI element. Just call a function like `infoToast("some message")`, even way down in the bowels of non-UI code, and you'll get a text message popping up on the screen for a moment.
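Here's a rough TypeScript sketch of the on-device logging idea. The `KeyValueStore` abstraction, `LOG_KEY`, and `exportLog` are my own illustrative names; in the browser, the store would be `localStorage`, and the real CDA `log()` API may differ:

```typescript
// Illustrative sketch of on-device logging - not the actual CDA API.
// The store is abstracted so the idea is clear: messages only ever
// land in local storage on the user's device.

interface KeyValueStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const LOG_KEY = 'decent-app-log'; // hypothetical storage key

function log(store: KeyValueStore, message: string): void {
  const existing = store.getItem(LOG_KEY);
  const entries: string[] = existing ? JSON.parse(existing) : [];
  entries.push(`${new Date().toISOString()} ${message}`);
  store.setItem(LOG_KEY, JSON.stringify(entries)); // persisted locally, never sent anywhere
}

// The copy-to-clipboard UI would read the same key and hand the text to the user.
function exportLog(store: KeyValueStore): string {
  const existing = store.getItem(LOG_KEY);
  return existing ? (JSON.parse(existing) as string[]).join('\n') : '';
}
```

Because export is a deliberate user action on plain text, the user stays in control of where the data goes.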
I should be clear, because I mention data collection/tracking in the above features...
All of this is implemented with local-only, browser-stored data. The point of Decent Apps is to not send your data to a server. We don't protect your data with a reassuring "Privacy Policy" - we protect your data by not having it.
Come to my house and attack me with a pressure washer if I ever screw this up.
-Erik Hermansen
Founder, Decent Apps