"...so that you always have the most current version!"
If you see this assertion with respect to a piece of software you're considering, my advice to you is to RUN!
Yes, there are market segments for which this is not an unreasonable practice. I've worked with software applications used to populate forms (which are then printed, signed, and filed... *sigh* A subject for another post...) that are based on legally mandated designs (think tax forms, real estate appraisal forms, disclosure forms, etc.) which change somewhat frequently. In this market, it's entirely reasonable to expect that patches will have to be produced regularly. Ideally, the application should be designed so that each patch touches the smallest code footprint possible and provides the least opportunity to introduce regression errors.
On the other hand, if your software vendor is touting frequent patches that add new features or fix defects as a benefit, then I'd argue you're seeing the output of shoddy software "engineering" practices and bad product management. This "patch early, patch often" attitude seems to be driven by a misguided attempt to be "support-oriented" or "Customer-driven", but it actually results in a worse experience for the Customer and increased expense for the vendor.
Defects and Quality Assurance
With respect to defects, I agree with using bug tracking systems, responding to Customer-reported issues, and ultimately releasing patches as necessary. I'm not in favor, however, of leaning on this strategy as an excuse to use your production users as unwitting beta testers, which is where it seems to lead.
This practice has a negative impact on quality assurance. QA is often neglected in the software industry anyway, and the idea of issuing patches as issues arise, in lieu of comprehensive testing and QA procedures, is seductive to management because it can further decrease the perceived "expense" of QA. Why spend time looking for defects internally when the Customers can report them? As you can well imagine, this also produces patches of equal or lesser "quality" than the original software, and often introduces regression errors.
Coupling the scrimping on QA with the automatic delivery of patches creates the most terrifying scenario. I've been in the situation, all too frequently, of having to explain to one of my consulting Customers that a software update that was automatically applied (without their express consent) actually created an issue due to a vendor-induced regression error.
IT Support Hell
Even when a patch isn't automatically applied, though, applying patches from a vendor that has a history of "poison pill" patches is a game of Russian roulette for your IT staff.
Suppose you spend the money for a lab environment to test patches before deployment, and even more on the staff to do the testing: unless you're sure that your lab accurately represents every aspect of the production environment, and the usage patterns of the production users, you're still playing with fire.
How many of us have WAN simulators in our labs, or employ testers who are as familiar with the application's features as the users who work with it every day? Most test labs I've seen consist of a secondary installation of the application on an unrelated production server, the desktop PCs of the helpdesk or IT support team, and just enough time to install the patch and see whether the application opens afterwards.
With the plethora of applications that even a small company employs, combined with frequent operating system patches and the potential for unwanted "interaction" between disparate applications installed on the same PC, it's all but impossible for small organizations to effectively test patches prior to deployment. The sheer number of hardware, operating system, and application software configurations is simply too great. Small companies are never going to be able to fully test all possible scenarios, so they're better off choosing applications that require less frequent patching and that come from vendors who ship software with fewer defects.
Features on a Whim
Implementing new features on the whim of a single Customer, without undertaking any kind of formal requirements research and planning, is a setup for yet more patches to be issued when other Customers discover the new feature and find that it doesn't quite meet their needs. It would be better to capture new feature requests in a requirements specification for a future version than to write code against ill-conceived requirements. Vendors seem to see this responsiveness as a benefit to the Customer, but I'd much rather have fewer patches to support, even at the risk of having to wait for features.
Substandard Technical Support
The "update frequently" attitude leads to substandard technical support as well. Invariably, the support technicians are taught to ask the Customer for the version number of the application they're using, and to instruct them to download the latest build before the Customer even has a chance to describe the issue.
This is, I suppose, because the technical support management at these companies believes that "most" issues are already solved in the new build. Of course, without actually determining what issue is being reported, and at least following up to see whether the new build resolved it, blindly directing the Customer to the most recent build does nothing to improve the quality of your support metrics! (A feedback loop that appears to be missing the "loop" part, eh?)
It sounds silly if you actually think about it, but I've been directed on many occasions to "download the current version and call back if you're still having a problem". Coupled with the problem of low-quality patches that contain regression errors, there is serious potential to induce new issues by instructing a Customer to update their installation. It would seem to me that creating additional issues when the Customer is already experiencing one is bad Customer service policy.
In the end, when I see a company touting frequent patches as a feature, my "gut" tells me that they have succumbed to the stuporous routine of patching problems as they arise, rather than taking the time to develop the internal practices and controls necessary to stop poor-quality software from ever shipping. I treat patch frequency as a criterion when I evaluate software for my Customers, and an application that is patched very frequently receives low marks.
I encourage you to hold software vendors who treat you this way accountable. I would advise you to evaluate competing products that have a shorter history of frequent patching. Even if the competition lacks a few features or isn't as "nice", you'll more than make up for that in fewer patch-related headaches, and you'll be sending a clear message to the vendor that this kind of behavior won't be tolerated.