LOL - you might not want to believe it, but there is nothing to cut down. I actively steer clear of LLMs because I find them repulsive (being so confidently wrong so much of the time).
Nevertheless, there will probably be people who claim that thanks to LLMs we no longer need skills like language processing, working memory, or creative writing, because LLMs can do all of this much better than humans (just like a calculator can compute a square root faster). I think that's bullshit, because LLMs simply aren't capable of doing any of these things in a meaningful way.
As usual with chatbots, I'm not sure whether it's the wrongness of the answer itself that bothers me most or the self-confidence with which that answer is presented. I think it's the latter, because I suspect that's why so many people don't question wrong answers (especially ones that are harder to check than a simple calculation).