We already have quite good methods for compression, both lossy and lossless. Any "AI" method would have to reliably beat the benchmarks set by those. Seems like that hasn't happened yet, though there definitely is research in that direction.
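For a sense of the baseline, here's a rough sketch of the kind of ratio an AI codec would have to beat reliably (Python's standard zlib stands in for "existing lossless methods"; the input data is made up):

```python
# Minimal sketch: measure a baseline lossless compression ratio with zlib.
# Any learned codec would need to beat numbers like this consistently,
# across many inputs, before the extra compute is worth it.
import os
import zlib

def compression_ratio(data: bytes, level: int = 9) -> float:
    """Return original_size / compressed_size for DEFLATE at the given level."""
    compressed = zlib.compress(data, level)
    return len(data) / len(compressed)

if __name__ == "__main__":
    noise = os.urandom(1 << 16)      # worst case: incompressible random bytes
    text = b"la " * (1 << 14)        # easy case: highly repetitive data
    print(f"random bytes: {compression_ratio(noise):.2f}x")
    print(f"repetitive  : {compression_ratio(text):.2f}x")
```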
AI
Artificial intelligence (AI) is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals, which involves consciousness and emotionality. The distinction between the former and the latter categories is often revealed by the acronym chosen.
Because AI can't help but mess with shit. I've tried giving something like GPT an image and telling it to do nothing to the image and just give it back. Poof... it needed to add crap or change things for no reason.
Tools to compress music already exist. Use those; they work.
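If it helps, a minimal sketch of what "use those" looks like in practice, assuming ffmpeg with libopus support is installed (the file names are just placeholders):

```python
# Minimal sketch: lossy-compress an audio file with a standard, well-tuned codec
# by calling ffmpeg. Assumes ffmpeg is on PATH and was built with libopus.
import subprocess

def encode_to_opus(src: str, dst: str, bitrate: str = "96k") -> None:
    """Transcode src (e.g. a WAV file) to Opus at the given bitrate."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-c:a", "libopus", "-b:a", bitrate, dst],
        check=True,
    )

# Placeholder names; point these at a real file.
encode_to_opus("input.wav", "output.opus")
```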
Data compression pretty much already works like this. The most likely reason is that, given current encoding paradigms, data compression is already about as good as it can be. The kind of parameter notation you're describing would probably not achieve greater compression than existing algorithms.
It depends on why you wanna do it.
Because smaller files are easier to handle and send? Sure, but that means lossy compression. Fundamentally, autoencoders do roughly what you're describing, but pure compression with AI hasn't turned out to be that useful, so they get used in a lot of other ways instead, for example encoding to and decoding from a latent space (see the sketch below).
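A rough sketch of the autoencoder idea, just to make the latent-space point concrete (PyTorch assumed; the frame size, latent size, and random "audio" are arbitrary placeholders, not a real codec):

```python
# Minimal sketch of an autoencoder as lossy compression: the encoder squeezes
# each frame into a small latent vector, the decoder reconstructs it, and the
# reconstruction error is exactly what you lose.
import torch
import torch.nn as nn

FRAME = 1024   # samples per frame (assumed)
LATENT = 64    # size of the compressed representation (assumed)

class AudioAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: frame -> small latent vector (the "compressed" form).
        self.encoder = nn.Sequential(
            nn.Linear(FRAME, 256), nn.ReLU(), nn.Linear(256, LATENT)
        )
        # Decoder: latent vector -> reconstructed frame (lossy by construction).
        self.decoder = nn.Sequential(
            nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, FRAME)
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AudioAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

frames = torch.randn(32, FRAME)        # placeholder for real audio frames
for _ in range(100):                   # tiny training loop, illustration only
    recon = model(frames)
    loss = nn.functional.mse_loss(recon, frames)
    opt.zero_grad()
    loss.backward()
    opt.step()

latent = model.encoder(frames)         # 1024 values -> 64 per frame
print(latent.shape, loss.item())
```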
Or do you wanna do it so AI can make awesome music if you just give it a melody? There are currently other ways to do that: this functionality can be built directly into music generation models, which is what models like those from Suno AI already do (afaik).
TL;DR: every solution needs a problem, and what you're describing hasn't been a big enough issue or priority to implement.