Jon's answer is great for approaching it from analogy. If a more concrete wording is useful for you, I can pitch in.
Let's start with a variable. A variable is a [named] thing which contains a value. For instance, `int x = 3;` defines a variable named x, which contains the integer 3. If I then follow it up with an assignment, `x = 4;`, x now contains the integer 4. The key thing is that we didn't replace the variable. We don't have a new "variable x whose value is now 4"; we merely replaced the value of x with a new value.
Now let's move to objects. Objects are useful because often you need one "thing" to be referenced from many places. For example, if you have a document open in an editor and want to send it to the printer, it'd be nice to only have one document, referenced both by the editor and the printer. That'd save you having to copy it more times than you might want.
However, because you don't want to copy it more than once, we can't just put an object in a variable. Variables hold onto a value, so if two variables held onto an object, they'd have to make two copies, one for each variable. References are the go-between that resolves this. References are small, easily copied values which can be stored in variables.
So, in code, when you write `Dog d = new Dog();`, the `new` operator creates a new Dog Object and returns a Reference to that Object, so that it can be assigned to a variable. The assignment then gives your variable a value: a Reference to your newly created Object.
Google brought up a similar question with an answer that I think is very good. I've quoted it below.
There's another distinction lurking here that is explained in the Cook essay I linked.
Objects are not the only way to implement abstraction. Not everything is an object. Objects implement something which some people call procedural data abstraction. Abstract data types implement a different form of abstraction.
A key difference appears when you consider binary methods/functions. With procedural data abstraction (objects), you might write something like this for an Int set interface:
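In C(ish) syntax, one way the object-style interface might look (all names here are illustrative, not from Cook's essay) is a record of function pointers, where every operation, including the binary one, sees only the abstract `IntSet` type:

```c
#include <stddef.h>

/* Procedural data abstraction: an "object" is a record of operations
 * plus a hidden representation. */
typedef struct IntSet IntSet;
struct IntSet {
    int     (*contains)(const IntSet *self, int n);
    IntSet *(*insert)(IntSet *self, int n);
    /* Binary method: the argument is the ABSTRACT type, so no
     * implementation may assume anything about other's representation. */
    IntSet *(*unionWith)(IntSet *self, IntSet *other);
    void    *rep;   /* private representation, hidden behind the pointers */
};
```

The `void *rep` field stands in for whatever concrete data each implementation chooses; only that implementation's own functions ever look inside it.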
Now consider two implementations of IntSet, say one that's backed by lists and one that's backed by a more efficient binary tree structure:
Notice that unionWith must take an IntSet argument. Not the more specific type like ListIntSet or BSTIntSet. This means that the BSTIntSet implementation cannot assume that its input is a BSTIntSet and use that fact to give an efficient implementation. (It could use some run time type information to check it and use a more efficient algorithm if it is, but it still could be passed a ListIntSet and have to fall back to a less efficient algorithm).
Compare this to ADTs, where you may write something more like the following in a signature or header file:
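A sketch of such a header (function names invented for illustration); the crucial point is that the representation type is opaque, so clients never see it:

```c
/* intset.h (sketch): the representation is abstract -- clients only
 * ever see an opaque pointer, never the struct behind it. */
typedef struct IntSetRep *IntSet;   /* struct IntSetRep is NOT defined here */

IntSet intset_empty(void);
IntSet intset_insert(IntSet s, int n);
int    intset_contains(IntSet s, int n);
/* Binary operation: whichever implementation file is linked in knows
 * the concrete representation of BOTH arguments. */
IntSet intset_union(IntSet s1, IntSet s2);
```

Because all values of type `IntSet` are guaranteed to come from one linked-in implementation, `intset_union` can safely assume a single concrete representation for both arguments.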
We program against this interface. Notably, the type is left abstract: you don't get to know what it is. A BST implementation then provides a concrete type and operations:
Now union actually knows the concrete representations of both s1 and s2, so it can exploit this for an efficient implementation. We can also write a list backed implementation and choose to link with that instead.
I've written C(ish) syntax, but you should look at e.g. Standard ML to see abstract data types done properly (there you can actually use more than one implementation of an ADT in the same program, roughly by qualifying the types: BSTImpl.IntSetStruct and ListImpl.IntSetStruct, say).
The converse of this is that procedural data abstraction (objects) allows you to easily introduce new implementations that work with your old ones. For example, you can write your own custom LoggingIntSet implementation and union it with a BSTIntSet. But this is a trade-off: you lose informative types for binary methods! Often you end up having to expose more functionality and implementation details in your interface than you would with an ADT implementation. Now I feel like I'm just retyping the Cook essay, so really, read it!
I would like to add an example to this.
Cook suggests that an example of an abstract data type is a module in C. Indeed, modules in C involve information hiding, since there are public functions that are exported through a header file, and static (private) functions that are not. Additionally, often there are constructors (e.g. list_new()) and observers (e.g. list_getListHead()).
A key point of what makes, say, a list module called LIST_MODULE_SINGLY_LINKED an ADT is that the functions of the module (e.g. list_getListHead()) assume that the data being input has been created by the constructor of LIST_MODULE_SINGLY_LINKED, as opposed to any "equivalent" implementation of a list (e.g. LIST_MODULE_DYNAMIC_ARRAY). This means that the functions of LIST_MODULE_SINGLY_LINKED can assume, in their implementation, a particular representation (e.g. a singly linked list).
LIST_MODULE_SINGLY_LINKED cannot inter-operate with LIST_MODULE_DYNAMIC_ARRAY: we cannot feed data created with the constructor of LIST_MODULE_DYNAMIC_ARRAY to an observer of LIST_MODULE_SINGLY_LINKED, because LIST_MODULE_SINGLY_LINKED assumes a particular representation for a list (as opposed to an object, which only assumes a behaviour).
This is analogous to the way that two different groups from abstract algebra cannot interoperate (that is, you can't take the product of an element of one group with an element of another group). This is because groups assume the closure property (the product of two elements of a group must itself be in the group). However, if we can prove that two different groups are in fact subgroups of another group G, then we can use the product of G to combine two elements, one from each of the two groups.
Comparing ADTs and objects
Cook ties the difference between ADTs and objects partially to the expression problem. Roughly speaking, ADTs are coupled with generic functions, which are often implemented in functional programming languages, while objects are coupled with Java "objects" accessed through interfaces. For the purposes of this text, a generic function is a function that takes in some arguments ARGS and a type TYPE (pre-condition); based on TYPE it selects the appropriate function, and evaluates it with ARGS (post-condition). Both generic functions and objects implement polymorphism, but with generic functions, the programmer KNOWS which function will be executed by the generic function without looking at the code of the generic function. With objects, on the other hand, the programmer does not know how the object will handle the arguments, unless the programmer looks at the code of the object.
Usually the expression problem is thought of in terms of "do I have lots of representations?" vs. "do I have lots of functions with few representations?". In the first case one should organize code by representation (as is most common, especially in Java). In the second case one should organize code by functions (i.e. having a single generic function handle multiple representations).
If you organize your code by representation, then, if you want to add extra functionality, you are forced to add that functionality to every representation of the object; in this sense adding functionality is not "additive". If you organize your code by functionality, then, if you want to add an extra representation, you are forced to add that representation to every function; in this sense adding representations is not "additive".
Advantage of ADTs over objects
Adding functionality is additive
Possible to leverage knowledge of the representation of an ADT for performance, or to prove that the ADT will guarantee some postcondition given a precondition. This means that programming with ADTs is about doing the right things in the right order (chaining together pre-conditions and post-conditions towards a "goal" post condition).
Advantages of objects over ADTs
Adding representations is additive
Objects can inter-operate
It's possible to specify pre/post conditions for an object, and chain these together as is the case with ADTs. In this case, the advantages of objects are that (1) it's easy to change representations without changing the interface and (2) objects can inter-operate. However, this defeats the purpose of OOP in the sense of Smalltalk. (See the section "Alan Kay's version of OOP".)
Dynamic dispatch is key to OOP
It should be apparent now that dynamic dispatch (i.e. late binding) is essential for object oriented programming. This is what makes it possible to define procedures in a generic way that doesn't assume a particular representation. To be concrete: object oriented programming is easy in Python, because it's possible to write methods of an object in a way that doesn't assume a particular representation. This is why Python doesn't need interfaces like Java does.
In Java, classes are ADTs. However, a class accessed through the interface it implements is an object.
Addendum: Alan Kay's version of OOP
Alan Kay explicitly referred to objects as "families of algebras", and Cook suggests that an ADT is an algebra. Hence Kay likely meant that an object is a family of ADTs. That is, an object is the collection of all classes that satisfy a Java interface.
However, the picture of objects painted by Cook is far more restrictive than Alan Kay's vision. Kay wanted objects to behave like computers in a network, or like biological cells. The idea was to apply the principle of least commitment to programming, so that it's easy to change low level layers of an ADT once the high level layers have been built using them. With this picture in mind, Java interfaces are too restrictive because they don't allow an object to interpret the meaning of a message, or even ignore it completely.
In summary, the key idea of objects for Kay is not that they are a family of algebras (as is emphasized by Cook). Rather, Kay's key idea was to apply a model that worked in the large (computers in a network) to the small (objects in a program).
edit: Another clarification on Kay's version of OOP: The purpose of objects is to move closer to a declarative ideal. We should tell the object what to do, not tell it how by micromanaging its state, as is customary with procedural programming and ADTs. More info can be found here, here, here, and here.
edit: I found a very, very good exposition of Alan Kay's definition of OOP here.