> floats, and doubles. What else is there?
>
> Fixed point is a "lossless" way to represent decimal fractions. Standard
> floating-point uses binary internally, and so "0.01" can't actually be
> represented properly (just as 1/3 can't be represented completely in
> decimal -- 0.33333...). Fixed point is generally implemented by using
> integers and keeping track of where a decimal point ought to be. (Addition
> is straightforward, multiplication/division gets an extra step.)

Fixed point is _not_ lossless. It is exactly what the name implies: a format where the decimal point is in a fixed location. I am very sure that there will be some "loss" when representing the number 1/3. You either have to store it as a fraction or there is loss ;-) There are some C math libraries that support arbitrary precision.

Programming languages are tools. You should use the "best" tool for the job. There are a lot of factors that determine what the "best" tool is. Two of the big constraining factors are schedule and what you already know.

As far as programming mistakes go, I would agree that _some_ mistakes are easier to make in C. There are tools available to help catch a lot of mistakes. If you are careful and take advantage of the tools you have available, it is possible to write a good C program. It has been my experience that the same is true for most (maybe all) programming languages. If you are not careful, there will be bugs in the program. If the space shuttle crashes on my head, do I care whether it was an array boundary overrun in C or that a Java programmer forgot to set his references to objects to null when he was done with them and ran out of memory? Either way I am having a bad day ;-)

The point is: use good programming practices and the language that makes the most sense for your project and target platform. To a large extent this will be based on your experience. The answer to the question could, and possibly should, change as you gain experience.