The extended version of the Pets vs Cattle analogy goes something like this:
In the old way of doing things, we treat our servers like pets, for example Bob the mail server. If Bob goes down, it’s all hands on deck. The CEO can’t get his email and it’s the end of the world. In the new way, servers are numbered, like cattle in a herd. For example, www001 to www100. When one server goes down, it’s taken out back, shot, and replaced on the line.
That quote is from Randy Bias's 2016 article The History of Pets vs Cattle and How to Use the Analogy Properly, in which he documented his own use of the concept starting around 2011.
He goes on to define the two:
Pets: Servers or server pairs that are treated as indispensable or unique systems that can never be down. Typically they are manually built, managed, and “hand fed”. Examples include mainframes, solitary servers, HA loadbalancers/firewalls (active/active or active/passive), database systems designed as master/slave (active/passive), and so on.
Cattle: Arrays of more than two servers, that are built using automated tools, and are designed for failure, where no one, two, or even three servers are irreplaceable. Typically, during failure events no human intervention is required as the array exhibits attributes of “routing around failures” by restarting failed servers or replicating data through strategies like triple replication or erasure coding. Examples include web server arrays, multi-master datastores such as Cassandra clusters, multiple racks of gear put together in clusters, and just about anything that is load-balanced and multi-master.
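The self-healing behavior in that definition can be sketched as a toy loop. This is a minimal illustration, not the API of any real orchestrator; the `Herd` class and its naming scheme (www001, www002, ...) are hypothetical, chosen to mirror the numbered-cattle analogy:

```python
class Herd:
    """Toy model of a 'cattle' server array: members that fail a
    health check are terminated and replaced, never nursed by hand."""

    def __init__(self, size):
        self._next_id = 1
        self.servers = [self._provision() for _ in range(size)]

    def _provision(self):
        # New servers get the next sequential number, like cattle tags.
        name = f"www{self._next_id:03d}"
        self._next_id += 1
        return {"name": name, "healthy": True}

    def health_check_and_heal(self):
        # Any unhealthy server is taken out of the array and a fresh
        # replacement is provisioned in its slot -- no human involved.
        replaced = []
        for i, srv in enumerate(self.servers):
            if not srv["healthy"]:
                replaced.append(srv["name"])
                self.servers[i] = self._provision()
        return replaced

herd = Herd(size=3)
herd.servers[1]["healthy"] = False        # simulate a failure
gone = herd.health_check_and_heal()
print(gone)                               # ['www002']
print([s["name"] for s in herd.servers])  # ['www001', 'www004', 'www003']
```

The point of the sketch is the asymmetry: there is no code path for repairing a server, only for replacing one, which is what distinguishes cattle from pets.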
Ultimately, focusing on the disposability of servers—a concept that Google, in fact, pioneered—is the most important aspect of Pets vs. Cattle and losing that, focusing on some other aspect, or ascribing something not intended (e.g. stateful applications as pets) distracts and muddies the waters.