Why can calculators compute better than computers?

I’ve always known that when you program anything that uses floating-point arithmetic, there is a chance the output will not be remotely similar to the desired output.

However, calculators are able to do such calculations quickly and accurately.

Why can calculators do these tasks so well but not programming languages/personal computers?
And why can’t we just implement in computers the methods calculators use to compute these answers?
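
For a concrete picture of what “not remotely similar” can look like, here is a minimal Python sketch (assuming the standard IEEE-754 doubles most languages give you):

```python
# 0.1 and 0.2 have no exact binary representation, so even this
# trivial sum is already slightly off.
a = 0.1 + 0.2
print(a)            # 0.30000000000000004
print(a == 0.3)     # False

# Adding a small number to a huge one loses it entirely:
# 1e16 + 1 cannot be represented, so the 1 simply disappears.
print((1e16 + 1) - 1e16)   # 0.0
```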

First of all, you are not quite correct: calculators are limited too, and they do not calculate with perfect accuracy. But because calculators are intended to be used only for calculations (while computers are not), they use bigger limits, and the exact limit depends on the calculator type. For example, my graphing calculator is limited to 299! when calculating with integers and 499! with decimals, while a standard calculator can only calculate something like 9! with integers (and I’m not sure what the limit is for decimals, 49!?). With floating point it is the same in principle.
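
To illustrate the factorial point: a computer is not inherently limited here either; the limit only appears when you insist on fixed-size floating point. A quick Python sketch (the values 299 and 170/171 are just illustrations):

```python
import math

# With arbitrary-precision integers, 299! is computed exactly;
# the result has 613 decimal digits.
exact = math.factorial(299)
print(len(str(exact)))             # 613

# A fixed-size IEEE-754 double tops out near 1.8e308, so the
# floating-point version already fails at 171!.
print(float(math.factorial(170)))  # ~7.26e306, still representable
try:
    float(math.factorial(171))
except OverflowError:
    print("171! does not fit in a double")
```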


Humans were only trying to make a calculator in the first place… and in the middle of those inventions they saw a lot more potential than just calculating…

I fully agree with betlista… Scientific calculators only go up to about 69!, since their range is capped around 10^99, and they approximate too (there’s a quick check of this below). Unless it’s really important, we like to approximate (i.e. we’re lazy :stuck_out_tongue: and aren’t on a moon mission; even there NASA approximated, they just kept the inaccuracy small).

Just as a perpetual-motion machine is impossible, in maths it is sometimes impossible (or our maths just isn’t developed enough yet) to calculate precisely… so we are happy with approximations… :slight_smile:
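
A quick check of that 69! figure in Python (just verifying the claimed range, nothing calculator-specific):

```python
import math

# A typical scientific calculator displays values up to 9.999...e99,
# so 69! is the last factorial it can show; 70! already overflows it.
print(math.log10(math.factorial(69)))   # ~98.2  -> 69! ~ 1.7e98, fits
print(math.log10(math.factorial(70)))   # ~100.1 -> 70! ~ 1.2e100, too big
```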

Why don’t they use the same limit for computers? Is there any harm in having a bigger limit?
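
On the “any harm” part: the main cost of a bigger limit is speed and memory. Hardware doubles are fixed-size and take a single instruction to multiply, while arbitrary-precision numbers are spread over many machine words. A rough, machine-dependent timing sketch in Python (the exact numbers will vary):

```python
import timeit

# Multiply a hardware double vs. a ~1000-digit integer, one million
# times each; the big integer is much slower because every multiply
# has to work through many machine words.
double_t = timeit.timeit("x * x", setup="x = 1.5e100", number=1_000_000)
bigint_t = timeit.timeit("x * x", setup="x = 10**1000", number=1_000_000)
print(f"double multiply : {double_t:.3f} s")
print(f"big-int multiply: {bigint_t:.3f} s")
```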