The one I use is on my phone. I don’t always use the paid version; it depends on what I have going on that month. On the phone you can use several different models, since they’re integrated into the app. I don’t stick to one exclusively, but my go-to is 4o. Even the paid version limits how much you can interact before you have to switch to another model.
For the phone it’s about $20 a month. Unless I expect a busy month where I’ll need it a lot, I use the free version, which in most cases is fine for basic things.
If you have a decent computer, you can install Ollama for free and use, say, the Llama 3.1 8B large language model for a text-based AI. That’s what I use. It’s pretty quick with my Radeon RX 7800 XT GPU, though it would be much faster with a modern Nvidia GPU. The only thing that sucks about local AI is that its knowledge only goes up to its training cutoff, which I believe for Llama 3.1 is December 2023. If you want generative image creation, free models exist for that too.
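If you end up going the local route, scripting against it is easy too. Here’s a minimal sketch in Python that assumes Ollama is running with its default REST API on localhost:11434 and that you’ve already pulled the model named in it (the model tag and prompt are just examples):

```python
import json
import urllib.request

# Assumes a local Ollama server on its default port (11434)
# and that the model below has already been pulled.
url = "http://localhost:11434/api/generate"
payload = {
    "model": "llama3.1:8b",  # example model tag; use whatever you pulled
    "prompt": "Explain the difference between stress and strain in one paragraph.",
    "stream": False,         # ask for one complete JSON response, not a stream
}

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(body["response"])
```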
Thanks for the tip. I’ll try the free version to get an idea of its limitations.
I infer that Llama is a front end that converts the user’s natural-language text to input for Ollama and then reverses the process for Ollama’s response(?)
I think all AI services have the same limitation, whether the code runs on your machine or elsewhere.
My intended application is researching scientific and engineering questions. Is there an AI service that’s regarded as “best” for that purpose?
I’d recommend buying the first month, exploring everything you can, and asking it to prove its results. Then question and probe those answers; make it prove itself. I often prove it wrong (or inaccurate for the specific subject at hand; specificity matters), and then we have something else to talk about. Beyond that, you’re limited to a number of responses before it switches models. Some models are better for math, some for images, some for calculations.
I have improved my knowledge substantially in a short period of time by challenging AI and going back and forth.
Ollama is a tool that lets you run large language models (LLMs) on your own computer. It’s designed to be easy to install and use, and it supports a variety of models, including Mistral and Llama 2 and 3. It’s a great option for people who want to experiment with LLMs or need to run them in a private environment. So it’s the other way around from your guess: Ollama is the program that runs and serves the model, and Llama is one of the models it can run.
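If you’d rather script it than use the command line, there’s also an official ollama Python package. A minimal sketch, assuming a local server is running and the example model tag below has been pulled:

```python
# pip install ollama  (the official Python client for a local Ollama server)
import ollama

# Chat-style call; "llama3" is an example model tag you'd need to have pulled.
reply = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Summarize the ideal gas law."}],
)
print(reply["message"]["content"])
```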
As for which LLM suits your intended purpose, there are several.
Open-source LLMs:

- LLaMA: Known for strong performance on a variety of tasks, including scientific and technical text understanding. A good choice if you want to run an LLM locally and keep full control over the model and your data.
- Falcon: Also strong on scientific and technical text, and particularly good at summarizing scientific papers and generating code.

Commercial LLMs:

- ChatGPT: Widely used for a variety of tasks, including scientific and engineering research. Particularly good at generating human-quality text and answering complex questions.
- Bard: Also well suited to scientific and engineering research, particularly for summarizing scientific papers and generating code.
(This entire post was generated by AI, including the picture. I was bored!)
Makes sense. It’s like having a good tutor on whatever subject you fancy at the moment.
I gave your post a “like,” but it really belongs to your AI.
Seriously, though, that was a useful summary and intro to available AI, wherever it came from – thank you. I supplemented it with a Wikipedia trip.
I feel like I’ve been living under a rock in this area. I hadn’t realized how much knowledge is integrated with the language model in contemporary AI. That model processes user input in the same way as any other text – there’s no need for special handling.
More to the point, there are many possibilities I should try, and there’s no reason to think one will be best for everything.
ChatGPT made this pfp for me. I think it uses DALL-E 3 for the text-to-picture stuff. I do have Ollama with Llama 3 locally, and as someone else mentioned, the weakness is when you need current information. I just have an Nvidia GTX 1060, but it works okay.
Anyway, I was pretty impressed that ChatGPT (DALL-E 3) was able to generate such a nice image for me from two fairly simple prompts. It also converted the image from WebP to JPEG when I asked it to.
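As an aside, that last conversion step is also a one-liner without any AI. A minimal sketch using the Pillow library (the file names are just examples):

```python
# pip install Pillow
from PIL import Image

# Open the WebP file and save it as JPEG.
# JPEG has no alpha channel, so convert to RGB first.
img = Image.open("avatar.webp").convert("RGB")
img.save("avatar.jpg", "JPEG", quality=90)
```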