Model to run on 2-4 GB of RAM?

Yeah, shooting the moon xD

I have a very limited use case - 1-2 emails per day, whose text needs to be analyzed for a potential event to add to my calendar (as an .ics file). Easy for ChatGPT, but I thought it might just be possible to do it locally on my shared Proxmox server.

I can devote 2 GB, perhaps up to 4 GB of RAM to this, and obviously processing speed is not much of an issue (if it takes half an hour, so be it) - but is there any model that will run within such a RAM limit?
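
For context, here's a rough sketch of the pipeline I'm picturing, assuming llama-cpp-python with some small quantized GGUF model (the model path is just a placeholder, not a recommendation) and the icalendar package for writing the .ics - the actual model choice is exactly what I'm asking about:

```python
# Sketch only: assumes llama-cpp-python and icalendar are installed, and that
# some small quantized GGUF model fits in the 2-4 GB RAM budget (placeholder path).
import json
from datetime import datetime
from icalendar import Calendar, Event
from llama_cpp import Llama

# Load the model CPU-only; n_ctx kept small to limit memory use.
llm = Llama(model_path="models/small-model-q4.gguf", n_ctx=2048, verbose=False)

PROMPT = (
    "Extract any calendar event from the email below. Reply with JSON only: "
    '{"summary": ..., "start": "YYYY-MM-DD HH:MM", "end": "YYYY-MM-DD HH:MM", '
    '"location": ...} or null if there is no event.\n\n'
)

def email_to_ics(email_text: str, out_path: str = "event.ics") -> bool:
    # Ask the model to extract event fields as JSON.
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": PROMPT + email_text}],
        temperature=0,
        max_tokens=256,
    )
    reply = out["choices"][0]["message"]["content"].strip()
    try:
        data = json.loads(reply)
    except json.JSONDecodeError:
        return False
    if not data:
        return False  # model says there's no event in this email

    # Build a single-event .ics file from the extracted fields.
    cal = Calendar()
    ev = Event()
    ev.add("summary", data.get("summary", "Event"))
    ev.add("dtstart", datetime.strptime(data["start"], "%Y-%m-%d %H:%M"))
    if data.get("end"):
        ev.add("dtend", datetime.strptime(data["end"], "%Y-%m-%d %H:%M"))
    if data.get("location"):
        ev.add("location", data["location"])
    cal.add_component(ev)
    with open(out_path, "wb") as f:
        f.write(cal.to_ical())
    return True
```

At 1-2 emails a day I'd just run this from a cron job, so even a painfully slow CPU-only model would be fine - the only real question is whether anything usable fits in that RAM.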