@freemo I want all the variants, promotion, no promotion, customized promotion, demotion(?), lateral motion(??). That's what c++ is usually about, every use case is respected and everything is optional on top of c.
@freemo undefined behavior is wonderful, beautiful and necessary, and I love it. The promotion rules are broken, and the result happens to be devastating. If signed overflow were fully defined behavior, the rules would still be bad, just not as devastating.
@freemo Right, comparison could be a good way to show it in C. Applied to your first example you can see the result will be less than 0; however, that is a benign case, no threat of overflow there, and the safe numerics wrapper will do the same. Multiplication is the problem:
#include <stdio.h>
int main(void)
{
    unsigned short a = 0xFFFF, b = 0xFFFF;
    /* both promote to signed int; 0xFFFF * 0xFFFF overflows it: UB */
    if (a * b < 0)
        puts("oh no!");
}
Though since it's undefined behaviour you can't be sure of anything! Here is a more complete example to demonstrate that:
https://coliru.stacked-crooked.com/a/d91be42d25ddf8ff
The safe numeric wrapper will do the right thing here and promote to unsigned int, which can hold the result without overflow.
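Not the library's actual API, just a minimal sketch of the idea (a hypothetical safe_mul helper, assuming 16-bit short and 32-bit int):

#include <cstdio>

// pick the promotion yourself instead of letting the language pick signed int
unsigned int safe_mul(unsigned short a, unsigned short b)
{
    return static_cast<unsigned int>(a) * static_cast<unsigned int>(b);
}

int main()
{
    std::printf("%u\n", safe_mul(0xFFFF, 0xFFFF)); // 4294836225, no UB
}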
@freemo nope, your second example doesn't print what you say it does, and it couldn't ever print anything negative, because you are printing the result as unsigned with printf's %u. Replace it with %i and woah, you've proved the opposite.
Look up the rules: anything that can fit into signed int is promoted to signed int, regardless of signedness, and regardless of whether the result of the operation can fit. I can't be bothered to devise an example in C where you can notice this in some weird indirect way due to sign extension, so here is a C++ example that demonstrates it directly:
template <typename> class show_type; // declared but never defined
void f()
{
    unsigned short a, b;
    show_type<decltype(a * b)> x; // error: incomplete type show_type<int>
}
It should give you an error about the incomplete type show_type<int>, the template argument indicating the type of the result of the multiplication.
@freemo Oh no, I made the confusion worse. I was indeed complaining about C. It's C's rule that two unsigned shorts promote to signed int. I agree with the library I linked, though it would have been even better if it allowed more customization.
@freemo are you referring to the rule set in the link I provided? I should have been more clear that that's a c++ library which fixes the problem (the one I "want!"). The language standard is that even two unsigned shorts will be promoted to signed int. The logic is: as long as it fits in signed int it will be promoted to signed, and only if it doesn't will it be unsigned. Any unsigned short fits in signed int, but the product of two might not (at least on common architectures).
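You can even check the rule at compile time (this assumes the common 16-bit short / 32-bit int setup):

#include <type_traits>

// unary + applies the integer promotions; both fit in int, so both become signed int
static_assert(std::is_same_v<decltype(+(unsigned char)0), int>);
static_assert(std::is_same_v<decltype(+(unsigned short)0), int>);
// unsigned int does not fit in int, so it alone stays unsigned
static_assert(std::is_same_v<decltype(+0u), unsigned int>);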
@freemo why unsigned to signed though? The usual justification for promotion is that it costs nothing and helps avoid overflow, but in the case of unsigned short to signed int, multiplication is still likely to overflow. To make matters worse, signed overflow is undefined behavior -_-
@iron_bug yes, it's c++ that fixes the c degeneracy I mentioned :P
Promoting unsigned short to signed int is absolutely insane, cause multiplication might overflow it, which is, surprise surprise, undefined behavior in your face!
unsigned short promotes to SIGNED int?! why... why did you do this to us, C?!
want!
@ravenclaw I say go forth and preach :V
young... father... pope?
@Kovaelin
You must instead join the church of emacs. The easiest path is catholic spacemacs.
Even blasVIMy is preferable to absolutely proprietary halfwitiJ.
@ravenclaw preach :V
@choutos
@ravenclaw not much to do other than being aware, I guess. Also from time to time "yelling" at people who try to assert their religion disguised as science.
@choutos
@ravenclaw I think it is very much such an illusion. Not exactly, but related. You look at something and feel it's impossible for this to be random, something must have caused it, "aha, I know! Zeus must have thrown that lightning bolt!". The same feeling makes you jump at correlations and interpret them as causality. It's a general desire to know everything, and be in control of everything.
You first must form a coherent, rigorous theory that is not open to interpretation. If someone says it is, you must fight them viciously (through elaborations) to make sure it isn't. This theory must also fully cover a reasonable part of reality and not be too narrow, to be useful. This first step alone is quite difficult. Then you must make sure every prior, even tangentially related, experiment does not refute it. Then you must devise various repeatable experiments that confirm it, in all of its aspects. If any aspect is not covered, that aspect should be removed from the final/definitive version.
Not everything they tell you in school has gone through such a process, because schools are meant to familiarize you with society and culture and pique your interest, not tell you all the undeniable truths (and they're also terrible at what they're doing). The pedantry usually starts at university level, but it is fundamental and necessary.
@fikran Well, I don't have all the details committed to memory, I usually just do a quick search when I have a specific question, usually forgetting to look in the most obvious place:
https://www.gnu.org/licenses/quick-guide-gplv3.en.html
@fikran CC licenses are not meant for software, so when applied to software they can be misinterpreted and manipulated in various very lawyery, handwavey ways, mainly because of the subtlety of the source-binary distinction, but also because of patent law. They also differ a lot among themselves. CC-BY is just attribution, like MIT or BSD; CC-BY-SA is strong copyleft, like GPL; CC-BY(-SA)-NC is a weird "don't make money from this cause if there is a way to make money I want it all for myself??" kind of license.
GPL is more specific and covers software-specific nuances better. GPL2 is outdated and shouldn't be used: it has loopholes and ergonomics problems, and it's not compatible with GPL3, which fixed them. The only reason to use GPL2, and its biggest loophole, is to allow so-called tivoization: locking down hardware, aka the reason you can't root your android device or update its kernel/OS.
@freemo You can add a specific language context you are most familiar with. I'm curious about any perspective, even if it's specific.
I wield the c++ chrono library and my ambition is to forge the timer to rule them all and in the darkness bind them *evil laugh*. I don't want people writing their own timers, what is this frivolity?!
I'm not sure what the creation of timers has to do with the zero guard. A timer being just a number or two, creation cost is negligible, and the division does not occur during creation but when checking the progress ratio, which can happen quite often. Say, in the context of animation, that can be every screen refresh, and there can be a lot of timers involved, as they are used for "inbetweening" (go from A to B in X time). In a more general sense though, the ratio calculation is just a division, usually one instruction, and adding the guard can add a significant relative overhead (I don't know, 30-100% maybe). It might not matter, but I want to cover the cases where it matters as well.
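To make it concrete, here is a minimal sketch (a hypothetical timer type, not my actual code) with the guard sitting on the hot path rather than in construction:

#include <chrono>

struct timer
{
    using clock = std::chrono::steady_clock;
    clock::time_point start = clock::now();
    clock::duration total{};

    // called every frame, for every tweened value
    double progress() const
    {
        // the zero guard: an extra compare and branch on top of what is
        // otherwise a single division
        if (total == clock::duration::zero())
            return 1.0; // or 0.0? a 0-total timer has no natural answer
        return std::chrono::duration<double>(clock::now() - start) / total;
    }
};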
Default initialization is usually considered a good thing if it's meaningful, since it's what the basic value types do. As in "here is my super simple type, if you can handle an int, you can handle it as well". Especially if you allow initialization to any value and the value type has a common-sense default, preventing default initialization is kind of wonky. There is very little difference between
Type(),
and
Type(0),
except that the latter requires the literal 0, which in the context of time durations is a bit problematic, since it needs units. Even when dealing with arithmetic types in generic code, people will often prefer default initialization to a literal 0, since the type in question might not want to be constructible from a single integer in general (a 2D vector that requires 2 integers, but also defaults to a natural [0,0]), or the value corresponding to integer zero might not be the best default (the additive identity for floats is -0).
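For illustration with chrono durations (just a sketch):

#include <chrono>
using namespace std::chrono;

milliseconds a{};      // default initialization: zero, no literal, no units
milliseconds b{0};     // Type(0): compiles, but the bare 0 carries no unit
// milliseconds c = 0; // rejected: chrono refuses implicit int -> duration
milliseconds d = 0ms;  // with a literal you must commit to a unit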
A special "0 timer" type can't really solve the performance issue for a value not knows at compile time, since it would require some sort of dynamic polymorphism, that would likely generate a worse check and a worse branch that the simple guard would. For values knows at compile time, the compiler will like optimize the guard away, so it's really all about the runtime requirements.
re: C++ & Rust
@amiloradovsky@functional.cafe Well, I guess we mean different things by type system then. I never said the borrow checker came from nowhere, I simply said it's not universal like the type system C++ has. I wouldn't want lifetimes in c++2x; instead, new primitives that would allow a library implementation. As far as I know the missing link is a compile-time stateful counter. Then later maybe some syntax sugar for the patterns that prove to be most useful.
I looked up linear type systems, and the first "not quite" example given is C++'s unique_ptr, a standard library class/type. There should just be another, fully "quite" example: an "owner_ptr" type that would also optionally cover all the other variations ("Ordered", "Affine", "Relevant") with what in the context of the language are called type parameters.
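To illustrate the "not quite" (my own example, not from the linked material): a linear value must be consumed exactly once, and unique_ptr enforces none of that at compile time:

#include <memory>

void demo()
{
    auto p = std::make_unique<int>(42);
    // never consuming p at all would be fine, so it's affine at best, not linear
    auto q = std::move(p);
    // the moved-from p is still a live, null object; misuse is caught
    // (if at all) at runtime, not by the type system
    if (!p) { /* observable moved-from state, not a compile error */ }
    (void)q;
}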
Whatever language it is, in a few decades it will have unwanted stale features. C++ is still alive and kicking because almost all of its features were and are optional.
Should 0 total duration be a valid state for a timer?
It's a sentinel value that might take care of some edge cases, but then you have to guard against division by 0 when calculating the progress ratio (elapsed/total). It's the only sensible default initialization value, but it also doesn't really make sense conceptually: it's done/triggered before it's even started.