Either way, the fact is that Siri started out on a high, but things declined fast. People liked it at first, claiming it could be the future of computing, but after 4S owners actually put it to use for a few weeks, the reality of it all became clear.
And that is basically the problem with fads, particularly technology fads; technology is a part of our daily lives that has unfortunately fallen victim to some of the most asinine coverage.
Siri, per se, isn't a bad technology. It's a cool technology. However, many tech news websites and personal blogs fail to make it clear that this is also a showcase technology, one that is far from good enough for real daily use, much less perfected. They hail these types of technologies as breakthroughs when in fact they have existed for decades, just in forms that many of the people who write about things they have no clue about have never heard of.
The problem with Siri (and speech recognition in general) is this: we got used to deterministic devices that respond correctly to our input. Be it a mouse, a keyboard, or a gesture, we trust our devices to accurately interpret our commands and produce the expected output. Speech recognition is far from having reached this level, and thus it reveals its weaknesses very quickly. In a day and age where fast and accurate human control is a staple of any device (from a PC to a smartphone), speech recognition is still just a curiosity.
Apple might well perfect this in time, but I don't see that happening for a good while (Nuance has been on this for ages, after all, and its Dragon suite is still far from perfect).
Apple will perfect nothing.
Siri uses what is probably just a subset of Nuance's Dragon Express product for speech recognition. It then uses a second component known as Natural Language Processing (NLP), which essentially parses the recognized input and issues the proper commands to the device.
As an example, let's assume I talk to my iPhone and say "Send my calendar to Adam every day at Eight AM". More than just recognizing the speech, the NLP layer needs to interpret this command and, with knowledge of the iPhone's calendar, contacts, and email applications, issue the proper commands in the correct sequence.
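To make that division of labor concrete, here is a minimal sketch of what such a second stage might look like once the recognizer has handed over a transcript. This is purely illustrative: the `Action` structure, the command names, and the pattern-matching approach are all my own invention for this example, not anything Apple or Nuance has documented.

```python
import re
from dataclasses import dataclass

@dataclass
class Action:
    """A single device-level command the NLP layer would dispatch."""
    command: str
    args: dict

def parse_transcript(transcript: str) -> list[Action]:
    """Turn an already-recognized utterance into an ordered command sequence."""
    # Match one known utterance shape; a real system would need far broader coverage.
    pattern = re.compile(
        r"send my calendar to (?P<contact>\w+) every ?day at (?P<time>.+)",
        re.IGNORECASE,
    )
    match = pattern.match(transcript.strip())
    if match is None:
        raise ValueError(f"Could not interpret: {transcript!r}")
    contact, time = match.group("contact"), match.group("time")
    # The parser needs to know which apps are involved (contacts, calendar,
    # mail) and in what order to invoke them.
    return [
        Action("contacts.lookup", {"name": contact}),
        Action("calendar.export", {"range": "today"}),
        Action("mail.compose", {"to": contact, "attachment": "calendar"}),
        Action("scheduler.recur", {"interval": "daily", "at": time}),
    ]

if __name__ == "__main__":
    for action in parse_transcript("Send my calendar to Adam every day at eight AM"):
        print(action)
```

Note that everything above starts from a clean transcript. If the recognizer mishears even one word, this whole downstream chain falls apart, which is exactly the point being made here.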
Now, what really brings the technology forward isn't so much NLP. It contributes too, but to a lesser extent: our A.I. technologies are already very capable (although not yet easy to integrate into small handheld devices with limited specs). What really pushes the technology forward is speech recognition. This is the essential component that still requires quite a few decades of research before we can have what I like to call Voice Control at the same level as a keyboard or mouse.
And speech recognition technology can only be developed by companies like Nuance and a few others (including some consortiums and non-profit organizations).