submitted 7 months ago by reddthat@reddthat.com to c/memes@reddthat.com
[-] TootSweet@lemmy.world 3 points 7 months ago

The really short version is that the first solution (with the for loop) can take "a long(er) time" depending on the values of x and y.

Say x is negative one million and y is positive one million. That's about two million iterations; if each iteration takes, say, a microsecond, computing the result would take roughly 2 seconds.
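The meme's code isn't quoted in this thread, but the loop solution would look something like this Python sketch (the function name is mine):

```python
def sum_between_loop(x, y):
    """Sum all integers from x to y inclusive, one at a time: O(n)."""
    total = 0
    for i in range(x, y + 1):  # runs y - x + 1 times
        total += i
    return total
```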

The second option is one decrement operation, one increment operation, two multiplication operations, one subtraction operation, and one division operation. No matter the values of x and y, the latter option will always take exactly the same amount of time.
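As a Python sketch (again, the name is illustrative), the second option is just:

```python
def sum_between_formula(x, y):
    """Sum all integers from x to y inclusive in constant time: O(1)."""
    # One increment, one decrement, two multiplications,
    # one subtraction, one division -- regardless of x and y.
    return (y * (y + 1) - x * (x - 1)) // 2
```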

There's a concept called "Big-O notation". If the amount of time an algorithm takes increases linearly with some value in the problem (in this case, the value of y minus x), we say it has an "O(n)" runtime. That's the case for the first solution I posted above. If the algorithm takes the same amount of time no matter what parameters are given, we say it's "O(1)". That's the case for the second option above. (If it takes time proportional to the square of some parameter, we say it's "O(n^2)". Exponential? "O(2^n)". Etc.)

(And, I'm handwaving quite a bit of rigor and details here. But hopefully you get the basic idea.)

99% of the time, if you've got a choice between an O(n) algorithm and an O(1) algorithm, you're better off going with the O(1) algorithm. It is entirely possible that an O(n) algorithm could run faster than an O(1) one in some special cases, but those special cases are almost always when n is small and the runtime of either one is going to be negligible. If n is large, you can get some very large time savings.

(And again, I'm leaving out details, but there's a lot online about Big-O notation or "asymptotic runtime bound".)

The different Big-O classifications tell you how "quickly" the runtime increases based on the parameter. O(n^n) grows quicker than O(2^n) which grows quicker than O(n^2) (or even O(n^c) where c is any constant) which grows quicker than O(n*log(n)) which grows quicker than O(n) which grows quicker than O(log(n)) which grows quicker than O(1). And in general, picking an option that's later in this list is almost always better than picking one earlier.
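You can see the ordering just by plugging in a moderate n. Here's a little Python sketch (n = 16 is an arbitrary choice) evaluating each growth function:

```python
import math

n = 16
growth = [
    ("log n",   math.log2(n)),      # 4
    ("n",       n),                 # 16
    ("n log n", n * math.log2(n)),  # 64
    ("n^2",     n ** 2),            # 256
    ("2^n",     2 ** n),            # 65,536
    ("n^n",     n ** n),            # ~1.8e19
]
# The values come out in strictly increasing order,
# mirroring the ranking in the paragraph above.
values = [v for _, v in growth]
assert values == sorted(values)
```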

Now, why the latter works. The sum of integers from 1 through n is the same as n*(n+1)/2. The sum of integers from x to y is just the sum of integers from 1 to y minus the sum of integers from 1 to x-1. y*(y+1)/2 - (x-1)*(x-1+1)/2 = y*(y+1)/2 - x*(x-1)/2 = (y*(y+1)-x*(x-1))/2.
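As a sanity check, a quick brute-force comparison in Python (names are mine) confirms the closed form matches the loop over a grid of small inputs:

```python
def sum_loop(x, y):
    return sum(range(x, y + 1))

def sum_formula(x, y):
    return (y * (y + 1) - x * (x - 1)) // 2

# Compare both approaches for every small (x, y) pair with y >= x.
for x in range(-10, 11):
    for y in range(x, 11):
        assert sum_loop(x, y) == sum_formula(x, y)
```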

Oh, and to your specific question: both of the solutions I posted in the meme assume that the question meant specifically all "integers" between x and y inclusive. So, no, it doesn't have to do with whether we read "numbers" there to mean integers or something else. Theoretically, the two solutions I posted should always give the same answer (so long as y >= x, which is a given in the question; perhaps also so long as no integer overflow/underflow occurs... I haven't thought that one through fully yet. Heh.) The only reason the second is preferable is that it's "quicker" and its runtime doesn't depend on the values of the parameters. I guess one could also argue the latter, being a one-liner, is more terse, which may also be desirable.

[-] CaptSneeze@lemmy.world 2 points 7 months ago

Wow! Thanks for taking the time to explain all of that. Makes complete sense.

this post was submitted on 14 Apr 2024
53 points (94.9% liked)