20 points · submitted on 21 Nov 2023 (last edited) by noneabove1182@sh.itjust.works to c/localllama@sh.itjust.works
[-] Hanabie@sh.itjust.works 4 points 10 months ago

What's the context window size?

[-] noneabove1182@sh.itjust.works 2 points 10 months ago

According to the config it looks like it's only 4096, and they specify in the arXiv paper that they kept the training data under that value, so it must be 4096. I'm sure people will expand it soon like they have with others.
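
A minimal sketch of how to check that value from the Hub config yourself; the repo id microsoft/Orca-2-7b is an assumption, not something confirmed in the thread:

```python
# Sketch: read the trained context length from a model's Hugging Face config.
# Assumption: the model under discussion is published as "microsoft/Orca-2-7b".
from transformers import AutoConfig

config = AutoConfig.from_pretrained("microsoft/Orca-2-7b")

# LLaMA-family configs expose the trained context window here.
print(config.max_position_embeddings)  # expected: 4096
```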

[-] Hanabie@sh.itjust.works 2 points 10 months ago

I'll wait a little then, I guess. I need 16k for what I'm doing. Thanks for the heads up 🙂
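
For reference, a minimal sketch of the usual stopgap for reaching 16k, linear RoPE scaling, assuming a standard LLaMA-architecture checkpoint and the same hypothetical repo id as above (quality typically degrades without fine-tuning at the longer length):

```python
# Sketch: stretch a 4096-token LLaMA-family checkpoint toward 16k with
# linear RoPE scaling. The rope_scaling kwarg is forwarded to the config
# by transformers; the repo id "microsoft/Orca-2-7b" is an assumption.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Orca-2-7b",
    rope_scaling={"type": "linear", "factor": 4.0},  # 4096 * 4 = 16384 tokens
)
```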

[-] noneabove1182@sh.itjust.works 2 points 9 months ago
[-] Hanabie@sh.itjust.works 2 points 9 months ago

Nice, will give it a try. Thank you 😊

[-] pennomi@lemmy.world 3 points 10 months ago

How does it compare to Mistral? That's the best-performing 7B model, and it's suspiciously missing from this report.

[-] noneabove1182@sh.itjust.works 2 points 10 months ago

I'm looking forward to trying it today. Based on the Orca 2 paper, I think this might make a good RAG model, but testing will be needed.
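
As a rough illustration of the kind of RAG setup being suggested, a minimal sketch that packs retrieved passages into the 4096-token window; the retriever is left abstract, and the repo id is again an assumption:

```python
# Sketch: assemble a RAG prompt that fits a 4096-token context window.
# `passages` would come from any retriever (vector store, BM25, ...);
# the repo id "microsoft/Orca-2-7b" is an assumption.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Orca-2-7b")

def build_rag_prompt(question: str, passages: list[str], budget: int = 3500) -> str:
    # Keep ~500 tokens of headroom for the question and the model's answer.
    context = ""
    for passage in passages:
        candidate = context + passage + "\n\n"
        if len(tokenizer.encode(candidate)) > budget:
            break  # stop adding passages once the token budget is reached
        context = candidate
    return f"Answer using only the context below.\n\n{context}Question: {question}"
```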
