It's weird, but I can't imagine a program in any language I know
of actually having trouble parsing -0.0.
Aren't situations like this the reason we have standards for representing floating-point numbers in the first place? I seem to remember something like that (IEEE 754, presumably).