Sunday, 1 July 2007

Computer Intelligence

I use a lot of computers and build a lot of applications. There is an expectation that things will work as designed, and when users complain that the system is not behaving the way they think it should, I never doubt the application itself.

As computer intelligence becomes more and more advanced, there will come a point where computers can think for themselves. Whilst this will be a major breakthrough in technology, I can't help but wonder whether our trust in future systems will be compromised.

If a system has the ability to think for itself, would it be the equivalent of a human decision maker? Given the number of 180-degree shifts in decisions that I have seen from business reps, business sponsors, and everyone else, will this become an issue? Even when the same data is presented to people, they can change their minds. Will we allow the same kind of flexibility in our systems?

The US has sent robots to Iraq to fight the war there. Whilst these are still human-operated, there may come a time when the robot makes its own decisions. Will they make the same mistakes that humans make, and shoot an innocent person when they shouldn't?

2 comments:

Waz said...

Mate, there's thinking and then there is thinking. The age of thinking robots (IMO) is a long, long, long way off. The age of robots that can make limited decisions based on a finite set of parameters is where we're at.

Think about it - how would a machine think? We make decisions based on our life experiences. Without them, how would a computer make a decision? How would a business machine know what is good for a company, without knowing how the share market will react to each option? How the company's competitors would react? How the employees would react? Computers are big calculators. They do maths. They do really complicated maths, but it's still only maths and a long way from thinking...

JookBoy said...

That's true enough. But aren't there machines nowadays that learn from previous experiences?

Would having the same experience lead different machines to different results (as is true for humans)?
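
To make that last question concrete, here is a minimal sketch in Python (a hypothetical toy learner written for illustration, not any real deployed system and not something from the original post): two perceptrons are given exactly the same "experience", but because their starting weights and the order in which they see the examples are random, they can end up making different calls on a borderline case.

import random

def train_perceptron(data, seed, epochs=20, lr=0.1):
    # Toy perceptron; the seed controls the initial weights and the order
    # in which this particular "machine" experiences the training examples.
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    b = rng.uniform(-1, 1)
    examples = list(data)
    for _ in range(epochs):
        rng.shuffle(examples)
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The same "experience" for every machine: a small, slightly messy data set.
data = [((0.1, 0.2), 0), ((0.9, 0.8), 1), ((0.2, 0.9), 1),
        ((0.8, 0.1), 0), ((0.5, 0.5), 1), ((0.5, 0.4), 0)]

borderline = (0.45, 0.55)  # an ambiguous case, the machine's "judgement call"

for seed in (1, 2, 3):
    w, b = train_perceptron(data, seed)
    verdict = 1 if w[0] * borderline[0] + w[1] * borderline[1] + b > 0 else 0
    print("machine", seed, "weights", [round(x, 2) for x in w], "verdict:", verdict)

Run it with a few different seeds and the verdicts on the borderline point may well differ, even though every machine saw the same facts - which is roughly the human-like flexibility (and unpredictability) the post worries about.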