Knowing isn’t everything. I feel like there are endless accounts where experience overrides someone’s ability to predict an outcome based on the knowledge available. Take, for example, creating music. There are rules for composing pieces that will sound pleasing to the human ear; no parallel fifths, for instance. Yet famous pieces contain passages that break these rules to accomplish a certain tone or sound.
What Google is trying to do, to me, sounds crazy. It’s great to have a search engine to find what you want, but I don’t want to be told what I need to find. That kind of artificial intelligence takes away the struggle of being human. It’s interesting, because the goal of engineers is to provide services – whether devices, roads, buildings, or products – to the public to make life safer, more efficient, or easier. But there have to be some lines drawn. Sometimes it’s better to take the back roads. Maybe you want to do something the old way, or the way your grandpa did it. It seems anti-human to design a computer to be human for you.
But if you believe that Google is anti-human (of sorts – I don’t mean that they want to end humanity), and that they are fundamentally scientific (though I’m not completely sure what that means), does that make science (and by extension engineering, the application of science) fundamentally anti-human? Where should the line be drawn between a useful net benefit to society and an artificial replacement of society? We’re already seeing a more detached culture with texting and emails and web-surfing… reading the article “Is Google Making Us Stupid?” scares me a bit.