this post was submitted on 27 Apr 2025
8 points (72.2% liked)

Qwen

50 readers
1 users here now

A community all about the Qwens! (LLMs, VLMs, WANs...)

Here are their blog page and their free chat interface.

Posts are allowed to have any format.

It is advised to put "Qwen" into the title somewhere.

Da Rules

  1. please be nice <3 🧸
  2. no bigotry or general evil-doings please! 💖
  3. no politics 🌍❌
  4. please don't make me add more rules <3

founded 2 weeks ago

i just wanted to get this outta my system >v< ...

i don't like those boring linear model structures... they work... but they don't look fun, nor intuitive. they just produce output... which is boring!

please, if some researcher with lots of GPUs sees this, maybe try this kind of architecture... you don't even have to credit me, just try it out and see where it goes ~ ~ ~

[–] pixxelkick@lemmy.world 4 points 1 week ago

Afaik all LLMs have very deep recurrence, as that's what provides their context window size.

The more recurrent params they have, the more context window they can store.
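(For anyone unsure what "recurrence" means here: in a recurrent model, a hidden state is carried forward step by step, so earlier inputs can influence later outputs. A toy sketch, with made-up weights and function names purely for illustration — note that mainstream transformer LLMs actually use attention rather than recurrence:)

```python
# Toy illustration of recurrence: a hidden state carried across steps.
# All names and weight values here are invented for the example.

def rnn_step(hidden, x, w_h=0.5, w_x=1.0):
    """One recurrent step: the new hidden state mixes old state and input."""
    return w_h * hidden + w_x * x

def run_sequence(inputs, hidden=0.0):
    """Fold a whole input sequence through the recurrent step."""
    states = []
    for x in inputs:
        hidden = rnn_step(hidden, x)
        states.append(hidden)
    return states

# The first input's influence decays but persists in every later state,
# which is the sense in which a recurrent state "stores" context.
states = run_sequence([1.0, 0.0, 0.0, 0.0])
print(states)  # [1.0, 0.5, 0.25, 0.125]
```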