Spin locks are the simplest means of achieving mutual exclusion on multicore systems, but they are expensive: they incur a lot of cache-coherence bus traffic, which makes them non-scalable. Alternatives include queued locks and ticket spin locks (the latter is used in the Linux kernel).
The functions associated with spin locks are syntactically equivalent to the ones mentioned for mutexes; the subtle difference lies in the way the wait for the lock is handled. Contrary to mutexes, threads waiting on a spin lock are not put to sleep; instead they continue to spin (that is, they repeatedly try to acquire the lock). Spin locks therefore usually have a quicker response time (no thread needs to be woken up when the lock is released), but they also waste processor cycles, a behavior commonly known as busy waiting. Spin locks are very common in High Performance Computing because in this field each thread is usually scheduled on its own processor anyway, so there is not much to gain from putting threads to sleep (which is a quite time-consuming process after all).
Which lock variant is provided depends on your threading system. Pthreads, for example, used to have only mutexes but now provides both. OpenMP is silent about the issue in the specification, so compiler vendors are free to use whatever variety they wish (as far as I remember, many use a mixture of both, where a thread spins for a predefined amount of time and is put to sleep if the lock could not be acquired by then). Check the documentation of your threading system for details about your locks; with the information provided above it should be pretty easy to find out.