Firefox
The latest news and developments on Firefox and Mozilla, a global non-profit that strives to promote openness, innovation and opportunity on the web.
You can subscribe to this community from any Kbin or Lemmy instance.
Related
- Firefox Customs: !FirefoxCSS@fedia.io
- Thunderbird: !Thunderbird@fedia.io
Rules
While we are not an official Mozilla community, we have adopted the Mozilla Community Participation Guidelines as far as they can be applied to a bin.
- **Always be civil and respectful.** Don't be toxic, hostile, or a troll, especially towards Mozilla employees. This includes gratuitous use of profanity.
- **Don't be a bigot.** No form of bigotry will be tolerated.
- **Don't post security compromising suggestions.** If you do, include an obvious and clear warning.
- **Don't post conspiracy theories.** Especially ones about nefarious intentions or funding. If you're concerned: ask. Please don't fuel conspiracy thinking here. Don't try to spread FUD, especially against reliable privacy-enhancing software. Extraordinary claims require extraordinary evidence. Show credible sources.
- **Don't accuse others of shilling.** Send honest concerns to the moderators and/or admins, and we will investigate.
- **Do not remove your help posts after they receive replies.** Half the point of asking questions in a public sub is so that everyone can benefit from the answers, which is impossible if you go deleting everything behind yourself once you've gotten yours.
Wasn't it revealed that Microsoft was training their Copilot on GitHub repositories, including private ones belonging to paying corporations who believed their source code to be safe and secure, resulting in secrets suddenly being made semi-public?
I feel that there were other incidents too, though I can't remember them off the top of my head. Definitely not a place I'd recommend anyone keep anything they love, even if they keep to best practices and don't store secrets in their repositories.
It was an open source game with open source mods. It wouldn't have made sense to have private repos.
I did a little Googling and Microsoft denies using private repositories for training. Do you have a source?
The claim above was off the top of my head, but I've found multiple pages of results describing the panic that ensued.
Now, Microsoft (Copilot and GitHub) is less than clear on what exactly is used for training, but the general consensus seems to be that they don't train on private repositories. There appears to be some confusion about this, though, especially regarding Microsoft's honesty about not using loopholes (the article in question might be faked; I haven't tried confirming it, and this topic is a shit show rife with miscommunication, misinformation, and quite a lot of confusion and fear regardless).
It appears that the specific issue I was referring to required human error before Copilot could train on the private repositories: namely, some unfortunate fool temporarily making the repository public (in which case it obviously isn't private anymore, and is therefore up for grabs by scrapers). Usually this wouldn't be a problem, since no indexer or scraper can check all of GitHub all at once all the time, so the chance of a briefly exposed repository being cached is rather small, albeit always there.
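To put rough numbers on that, here's a quick back-of-envelope sketch (all values are made up for illustration, and it assumes a scraper that revisits each repository at a fixed interval, which is a big simplification of how real crawlers work):

```python
# Back-of-envelope estimate: if a scraper re-checks a given repository
# every `poll_interval_hours`, and a repo is accidentally public for
# `exposure_hours`, the chance it gets seen (and cached) in that window
# is roughly exposure / interval, capped at 1. Numbers are invented.

def capture_probability(exposure_hours: float, poll_interval_hours: float) -> float:
    return min(1.0, exposure_hours / poll_interval_hours)

# A repo public for 30 minutes vs. a crawler that revisits weekly:
print(capture_probability(0.5, 24 * 7))  # ~0.003, i.e. about 0.3%
```

Tiny, but never zero, which is the point.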
That said, Copilot, Bing, and GitHub are likely better integrated than Bing simply wasting resources on continuously scraping GitHub for new repositories. I personally don't find it unlikely that GitHub saves resources by sending a signal to Bing whenever a repository is made public (that's something I might build myself, harboring no ill intentions), meaning it is possible (though in no way confirmed) that Bing punishes briefly exposed GitHub repositories instantly by caching them forever, as in the sketch below.
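To be clear, this integration is pure speculation on my part; here's roughly what such a notification hook could look like (the endpoint name and payload fields are invented for illustration, not any real GitHub or Bing API):

```python
# Hypothetical sketch of the speculated integration: a service that is
# notified the moment a repository flips to public and immediately
# enqueues it for crawling/caching. Endpoint and fields are made up.

from flask import Flask, request

app = Flask(__name__)
crawl_queue = []  # stand-in for a real job queue

@app.route("/repo-visibility-changed", methods=["POST"])
def on_visibility_change():
    event = request.get_json(silent=True) or {}
    if event.get("visibility") == "public" and "repo_url" in event:
        # Snapshot immediately; even if the owner flips the repo back
        # to private minutes later, the cache already has it.
        crawl_queue.append(event["repo_url"])
    return "", 204
```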
Is this 100% Microsoft being predatory? No, obviously not, since it requires a user error to happen in the first place, and since Copilot is technically only trained on public or exposed data. Still, Microsoft learning about this rather scammy behavior, simply classifying it as "low-impact-severity", and disabling the Bing cache for humans (but apparently not for Copilot) doesn't sit right with me. I'm sure they knew exactly what kind of data they were working with during dataset sanitization, so they could have chosen not to use sensitive data, or at least to inform exposed clients that their cached secrets were being added to Copilot.
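And filtering that kind of data out during sanitization wouldn't even be hard; here's a minimal sketch of a crude pattern-based filter (real pipelines use proper detectors like truffleHog or gitleaks with entropy checks and hundreds of patterns; these two are just well-known credential formats):

```python
import re

# Crude illustration of secret filtering during dataset sanitization.
# Only two well-known formats are checked here, for demonstration.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),     # AWS access key ID
    re.compile(r"ghp_[A-Za-z0-9]{36}"),  # GitHub personal access token
]

def looks_sensitive(document: str) -> bool:
    return any(p.search(document) for p in SECRET_PATTERNS)

# Drop flagged documents before they ever reach the training set:
corpus = ["print('hello')", "AWS_KEY = 'AKIA" + "A" * 16 + "'"]
clean = [doc for doc in corpus if not looks_sensitive(doc)]
```

Even something that crude would have flagged the obvious cases before they landed in a training set.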