
What is synchronized?

The Java Language Specification describes it as follows: the Java programming language provides multiple mechanisms for communicating between threads. The most basic of these is synchronization, which is implemented using monitors. Each object in Java is associated with a monitor, which a thread can lock or unlock. Only one thread at a time may hold a lock on a monitor. Any other threads attempting to lock that monitor are blocked until they can obtain a lock on it. A thread may lock a particular monitor multiple times (a reentrant lock); each unlock operation reverses the effect of one lock operation.

synchronized is a synchronization lock applied to a Java object. It provides mutual exclusion and reentrancy: mutual exclusion applies between multiple threads, while reentrancy concerns a single thread.

If the lock were not reentrant, a thread acquiring the same lock a second time would deadlock against itself. Once synchronized takes the lock, the release is automatic, whether the synchronized code finishes normally or throws an exception.
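A minimal sketch of reentrancy (class and method names here are illustrative): the same thread takes the same monitor twice without deadlocking.

public class ReentrantDemo {
    public synchronized void outer() {
        System.out.println("outer holds the lock");
        inner(); // the same thread re-acquires the monitor it already holds -- no deadlock
    }

    public synchronized void inner() {
        System.out.println("inner re-entered the lock");
    }

    public static void main(String[] args) {
        new ReentrantDemo().outer();
    }
}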

Where synchronized can be applied:

synchronized can modify instance methods, static methods, and code blocks, as the sketch below shows.
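The three forms differ only in which object's monitor is taken (LOCK is an illustrative name):

public class SyncForms {
    private static final Object LOCK = new Object();

    // 1. Instance method: locks the monitor of `this`.
    public synchronized void instanceMethod() { /* ... */ }

    // 2. Static method: locks the monitor of SyncForms.class.
    public static synchronized void staticMethod() { /* ... */ }

    // 3. Code block: locks the monitor of the chosen object.
    public void codeBlock() {
        synchronized (LOCK) { /* ... */ }
    }
}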

Properties of the synchronized lock:

1. Atomicity: threads access the synchronized code under mutual exclusion, as in a single-threaded environment, so it is naturally atomic.

2. Visibility: modifications to shared variables become visible promptly. This rests on the Java Memory Model rules that "before an unlock operation on a variable, its value must be synchronized back to main memory" and "a lock operation on a variable clears its copy in working memory, so before the execution engine uses the variable, its value must be re-initialized from main memory by a load or assign operation."

3. Ordering: code inside a synchronized block cannot be reordered with code outside it. The code inside the block may still be reordered internally, but because it runs as if single-threaded, the as-if-serial rule ("however instructions are reordered, the single-threaded result must not change") means this has no observable effect. The sketch after this list combines these properties.
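A minimal counter (illustrative names) showing atomicity and visibility together:

public class SyncCounter {
    private int count = 0; // shared state, guarded by the monitor of `this`

    // Atomicity: the read-modify-write runs as one indivisible critical section.
    // Visibility: on unlock the new value is written back to main memory, and the
    // next thread to lock re-reads it, so no update is lost or stale.
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }
}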

Verifying the underlying mechanism:

public static void main(String[] args) {
    synchronized (TestLock.class) {
        System.out.println("1");
    }
}

Running javap -c TestLock.class shows bytecode like the following.
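The original listing was lost in formatting; representative javap -c output for the block form looks like this (constant-pool indices vary, exception table omitted). Note the paired monitorenter/monitorexit, with a second monitorexit on the exception path so the lock is released no matter how the block exits:

public static void main(java.lang.String[]);
  Code:
     0: ldc           #2    // class TestLock
     2: dup
     3: astore_1
     4: monitorenter          // acquire the monitor
     5: getstatic     #3    // Field java/lang/System.out:Ljava/io/PrintStream;
     8: ldc           #4    // String 1
    10: invokevirtual #5    // Method java/io/PrintStream.println:(Ljava/lang/String;)V
    13: aload_1
    14: monitorexit           // release on the normal path
    15: goto          23
    18: astore_2
    19: aload_1
    20: monitorexit           // release again on the exception path
    21: aload_2
    22: athrow
    23: return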

Now look at the bytecode of a synchronized method:

public synchronized static void main(String[] args) {
    System.out.println("1");
}
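Representative javap -v output for this method (again reconstructed, indices vary); there is no monitorenter/monitorexit, only the ACC_SYNCHRONIZED flag:

public static synchronized void main(java.lang.String[]);
  descriptor: ([Ljava/lang/String;)V
  flags: (0x0029) ACC_PUBLIC, ACC_STATIC, ACC_SYNCHRONIZED
  Code:
     0: getstatic     #2    // Field java/lang/System.out:Ljava/io/PrintStream;
     3: ldc           #3    // String 1
     5: invokevirtual #4    // Method java/io/PrintStream.println:(Ljava/lang/String;)V
     8: return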

The bytecode shows that a synchronized method has no monitorenter and monitorexit instructions; in their place is the ACC_SYNCHRONIZED flag, which marks the method as synchronized. The JVM checks this access flag to tell whether a method is declared synchronized and then performs the corresponding synchronized invocation.

Before JDK 1.6, both synchronized blocks and synchronized methods were implemented by acquiring and releasing a Monitor. The wait, notify, and notifyAll methods likewise rely on the Monitor object's internal methods, which is why they must be called inside a synchronized method or block (the object's lock must be acquired first); otherwise a java.lang.IllegalMonitorStateException is thrown, as the example below illustrates.

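A minimal demonstration (illustrative class name):

public class MonitorStateDemo {
    public static void main(String[] args) throws InterruptedException {
        Object lock = new Object();

        // Calling wait() without owning the monitor fails immediately.
        try {
            lock.wait();
        } catch (IllegalMonitorStateException e) {
            System.out.println("not the owner: " + e);
        }

        // Inside synchronized the monitor is owned, so a timed wait is legal.
        synchronized (lock) {
            lock.wait(100);
        }
    }
}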
Before JDK 1.6, synchronized was a heavyweight lock and performed poorly, because the Monitor relies on the underlying operating system's mutex primitive: locking called the Monitor's enter and unlocking called its exit. Since Java threads map onto native OS threads, every Monitor operation required help from the OS, forcing threads to switch back and forth between user mode and kernel mode. These transitions take comparatively long and hurt performance significantly.

Fortunately, from JDK 1.6 on, the JVM optimized synchronized substantially, so today's synchronized performs quite well. To reduce the cost of acquiring and releasing locks and to avoid the heavyweight lock where possible, JDK 1.6 introduced lightweight locks and biased locks, both of which can avoid Monitor operations entirely.

Biased locking:

A biased lock is the mechanism used while a single thread executes the synchronized block; under concurrent multi-threaded access it is always converted to a lightweight or heavyweight lock.

The main goal of biased locking is to cut out the unnecessary lightweight-lock execution path when there is no multi-threaded contention: locking and unlocking a lightweight or heavyweight lock depends on multiple CAS atomic instructions, while a biased lock needs only one CAS, when the thread ID is swapped into the mark word. The sketch below shows one way to observe this.
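A way to watch the bias bits (a sketch, not from the original article): OpenJDK's JOL tool prints the object header. This assumes jol-core is on the classpath and a JDK where biased locking still exists and is active (roughly JDK 8-14, run with -XX:BiasedLockingStartupDelay=0; biased locking is disabled by default since JDK 15 and was later removed):

import org.openjdk.jol.info.ClassLayout;

public class BiasDemo {
    public static void main(String[] args) {
        Object o = new Object();
        // With the bias delay disabled, a fresh object is anonymously biased:
        // the mark word's low lock bits read 101.
        System.out.println(ClassLayout.parseInstance(o).toPrintable());
        synchronized (o) {
            // Still 101, but now the current thread's ID is stored in the
            // mark word -- the bias has been installed with a single CAS.
            System.out.println(ClassLayout.parseInstance(o).toPrintable());
        }
    }
}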

Biased lock revocation (the code the article quotes is BiasedLocking::revoke from src/hotspot/share/runtime/biasedLocking.cpp; the biased fast path itself is just the single CAS of the thread ID described above):

void BiasedLocking::revoke(Handle obj, TRAPS) {
  assert(!SafepointSynchronize::is_at_safepoint(), "must not be called while at safepoint");

  while (true) {
    // We can revoke the biases of anonymously-biased objects
    // efficiently enough that we should not cause these revocations to
    // update the heuristics because doing so may cause unwanted bulk
    // revocations (which are expensive) to occur.

    // Read the mark word.
    markWord mark = obj->mark();
    // Bail out if the mark word does not carry the bias pattern.
    if (!mark.has_bias_pattern()) {
      return;
    }

    if (mark.is_biased_anonymously()) {
      // We are probably trying to revoke the bias of this object due to
      // an identity hash code computation. Try to revoke the bias
      // without a safepoint. This is possible if we can successfully
      // compare-and-exchange an unbiased header into the mark word of
      // the object, meaning that no other thread has raced to acquire
      // the bias of the object.
      markWord biased_value       = mark;
      markWord unbiased_prototype = markWord::prototype().set_age(mark.age());
      // CAS an unbiased header into the mark word.
      markWord res_mark = obj->cas_set_mark(unbiased_prototype, mark);
      if (res_mark == biased_value) {
        return;
      }
      mark = res_mark;  // Refresh mark with the latest value.
    } else {
      Klass* k = obj->klass();
      markWord prototype_header = k->prototype_header();
      if (!prototype_header.has_bias_pattern()) {
        // This object has a stale bias from before the bulk revocation
        // for this data type occurred. It's pointless to update the
        // heuristics at this point so simply update the header with a
        // CAS. If we fail this race, the object's bias has been revoked
        // by another thread so we simply return and let the caller deal
        // with it.
        obj->cas_set_mark(prototype_header.set_age(mark.age()), mark);
        assert(!obj->mark().has_bias_pattern(), "even if we raced, should still be revoked");
        return;
      } else if (prototype_header.bias_epoch() != mark.bias_epoch()) {
        // The epoch of this biasing has expired indicating that the
        // object is effectively unbiased. We can revoke the bias of this
        // object efficiently enough with a CAS that we shouldn't update the
        // heuristics. This is normally done in the assembly code but we
        // can reach this point due to various points in the runtime
        // needing to revoke biases.
        markWord res_mark;
        markWord biased_value       = mark;
        markWord unbiased_prototype = markWord::prototype().set_age(mark.age());
        res_mark = obj->cas_set_mark(unbiased_prototype, mark);
        if (res_mark == biased_value) {
          return;
        }
        mark = res_mark;  // Refresh mark with the latest value.
      }
    }

    HeuristicsResult heuristics = update_heuristics(obj());
    if (heuristics == HR_NOT_BIASED) {
      return;
    } else if (heuristics == HR_SINGLE_REVOKE) {
      JavaThread *blt = mark.biased_locker();
      assert(blt != NULL, "invariant");
      if (blt == THREAD) {
        // A thread is trying to revoke the bias of an object biased
        // toward it, again likely due to an identity hash code
        // computation. We can again avoid a safepoint/handshake in this case
        // since we are only going to walk our own stack. There are no
        // races with revocations occurring in other threads because we
        // reach no safepoints in the revocation path.
        EventBiasedLockSelfRevocation event;
        ResourceMark rm;
        walk_stack_and_revoke(obj(), blt);
        blt->set_cached_monitor_info(NULL);
        assert(!obj->mark().has_bias_pattern(), "invariant");
        if (event.should_commit()) {
          post_self_revocation_event(&event, obj->klass());
        }
        return;
      } else {
        BiasedLocking::Condition cond = single_revoke_with_handshake(obj, (JavaThread*)THREAD, blt);
        if (cond != NOT_REVOKED) {
          return;
        }
      }
    } else {
      assert((heuristics == HR_BULK_REVOKE) ||
             (heuristics == HR_BULK_REBIAS), "?");
      EventBiasedLockClassRevocation event;
      VM_BulkRevokeBias bulk_revoke(&obj, (JavaThread*)THREAD,
                                    (heuristics == HR_BULK_REBIAS));
      VMThread::execute(&bulk_revoke);
      if (event.should_commit()) {
        post_class_revocation_event(&event, obj->klass(), &bulk_revoke);
      }
      return;
    }
  }
}

The biased-lock flag is stored in the object header's mark word, and the source documents the lock-bit patterns.

At run time, the data stored in the Mark Word changes as the lock flag bits change. In a 32-bit VM, the Mark Word in the various states is roughly laid out as follows.
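The original table image was lost; this is the commonly documented 32-bit HotSpot layout, reconstructed rather than taken from the article:

State              | Mark Word contents (first 30 bits)                       | Lock bits
unlocked           | identity hashcode (25) | GC age (4) | biased flag 0      | 01
biased             | thread ID (23) | epoch (2) | GC age (4) | biased flag 1  | 01
lightweight locked | pointer to the lock record in the owner's stack          | 00
heavyweight locked | pointer to the inflated ObjectMonitor                    | 10
GC marked          | empty                                                    | 11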


Lightweight lock acquisition:

The source lives in src/hotspot/share/runtime/objectMonitor.cpp:

void ObjectMonitor::enter(TRAPS) {
  // The following code is ordered to check the most common cases first
  // and to reduce RTS->RTO cache line upgrades on SPARC and IA32 processors.

  // The current thread.
  Thread * const Self = THREAD;

  // CAS _owner from NULL to the current thread.
  void * cur = Atomic::cmpxchg(&_owner, (void*)NULL, Self);
  // There was no previous owner -- we acquired the lock.
  if (cur == NULL) {
    // Reentry count must be 0.
    assert(_recursions == 0, "invariant");
    return;
  }

  // The owner is already the current thread: reentrant acquisition.
  if (cur == Self) {
    // TODO-FIXME: check for integer overflow!  BUGID 6557169.
    // Bump the reentry count.
    _recursions++;
    return;
  }

  if (Self->is_lock_owned((address)cur)) {
    assert(_recursions == 0, "internal state error");
    // First reentry after inflation.
    _recursions = 1;
    // Commute owner from a thread-specific on-stack BasicLockObject address to
    // a full-fledged "Thread *".
    _owner = Self;
    return;
  }

  // We've encountered genuine contention.
  assert(Self->_Stalled == 0, "invariant");
  Self->_Stalled = intptr_t(this);

  // Try one round of spinning *before* enqueueing Self
  // and before going through the awkward and expensive state
  // transitions.  The following spin is strictly optional ...
  // Note that if we acquire the monitor from an initial spin
  // we forgo posting JVMTI events and firing DTRACE probes.
  if (TrySpin(Self) > 0) {
    assert(_owner == Self, "must be Self: owner=" INTPTR_FORMAT, p2i(_owner));
    assert(_recursions == 0, "must be 0: recursions=" INTX_FORMAT, _recursions);
    assert(((oop)object())->mark() == markWord::encode(this),
           "object mark must match encoded this: mark=" INTPTR_FORMAT
           ", encoded this=" INTPTR_FORMAT, ((oop)object())->mark().value(),
           markWord::encode(this).value());
    Self->_Stalled = 0;
    return;
  }

  // Spinning did not succeed -- fall through to the heavyweight path.
  assert(_owner != Self, "invariant");
  assert(_succ != Self, "invariant");
  assert(Self->is_Java_thread(), "invariant");
  JavaThread * jt = (JavaThread *) Self;
  assert(!SafepointSynchronize::is_at_safepoint(), "invariant");
  assert(jt->thread_state() != _thread_blocked, "invariant");
  assert(this->object() != NULL, "invariant");
  assert(_contentions >= 0, "invariant");

  // Prevent deflation at STW-time.  See deflate_idle_monitors() and is_busy().
  // Ensure the object-monitor relationship remains stable while there's contention.
  // Bump the contention count.
  Atomic::inc(&_contentions);

  JFR_ONLY(JfrConditionalFlushWithStacktrace<EventJavaMonitorEnter> flush(jt);)
  EventJavaMonitorEnter event;
  if (event.should_commit()) {
    // Record the class of the locked object.
    event.set_monitorClass(((oop)this->object())->klass());
    // Record the object's memory address.
    event.set_address((uintptr_t)(this->object_addr()));
  }

  { // Change java thread status to indicate blocked on monitor enter.
    JavaThreadBlockedOnMonitorEnterState jtbmes(jt, this);

    Self->set_current_pending_monitor(this);

    DTRACE_MONITOR_PROBE(contended__enter, this, object(), jt);
    if (JvmtiExport::should_post_monitor_contended_enter()) {
      JvmtiExport::post_monitor_contended_enter(jt, this);

      // The current thread does not yet own the monitor and does not
      // yet appear on any queues that would get it made the successor.
      // This means that the JVMTI_EVENT_MONITOR_CONTENDED_ENTER event
      // handler cannot accidentally consume an unpark() meant for the
      // ParkEvent associated with this ObjectMonitor.
    }

    OSThreadContendState osts(Self->osthread());
    ThreadBlockInVM tbivm(jt);

    // TODO-FIXME: change the following for(;;) loop to straight-line code.
    for (;;) {
      jt->set_suspend_equivalent();
      // cleared by handle_special_suspend_equivalent_condition()
      // or java_suspend_self()

      // Enter the heavyweight (contended) path.
      EnterI(THREAD);

      if (!ExitSuspendEquivalent(jt)) break;

      // We have acquired the contended monitor, but while we were
      // waiting another thread suspended us. We don't want to enter
      // the monitor while suspended because that would surprise the
      // thread that suspended us.
      _recursions = 0;
      _succ = NULL;
      exit(false, Self);

      jt->java_suspend_self();
    }
    Self->set_current_pending_monitor(NULL);

    // We cleared the pending monitor info since we've just gotten past
    // the enter-check-for-suspend dance and we now own the monitor free
    // and clear, i.e., it is no longer pending. The ThreadBlockInVM
    // destructor can go to a safepoint at the end of this block. If we
    // do a thread dump during that safepoint, then this thread will show
    // as having "-locked" the monitor, but the OS and java.lang.Thread
    // states will still report that the thread is blocked trying to
    // acquire it.
  }

  Atomic::dec(&_contentions);
  assert(_contentions >= 0, "invariant");
  Self->_Stalled = 0;

  // Must either set _recursions = 0 or ASSERT _recursions == 0.
  assert(_recursions == 0, "invariant");
  assert(_owner == Self, "invariant");
  assert(_succ != Self, "invariant");
  assert(((oop)(object()))->mark() == markWord::encode(this), "invariant");

  // The thread -- now the owner -- is back in vm mode.
  // Report the glorious news via TI,DTrace and jvmstat.
  // The probe effect is non-trivial.  All the reportage occurs
  // while we hold the monitor, increasing the length of the critical
  // section.  Amdahl's parallel speedup law comes vividly into play.
  //
  // Another option might be to aggregate the events (thread local or
  // per-monitor aggregation) and defer reporting until a more opportune
  // time -- such as next time some thread encounters contention but has
  // yet to acquire the lock.  While spinning that thread could
  // spinning we could increment JVMStat counters, etc.

  DTRACE_MONITOR_PROBE(contended__entered, this, object(), jt);
  if (JvmtiExport::should_post_monitor_contended_entered()) {
    JvmtiExport::post_monitor_contended_entered(jt, this);

    // The current thread already owns the monitor and is not going to
    // call park() for the remainder of the monitor enter protocol. So
    // it doesn't matter if the JVMTI_EVENT_MONITOR_CONTENDED_ENTERED
    // event handler consumed an unpark() issued by the thread that
    // just exited the monitor.
  }
  if (event.should_commit()) {
    event.set_previousOwner((uintptr_t)_previous_owner_tid);
    event.commit();
  }
  OM_PERFDATA_OP(ContendedLockAttempts, inc());
}

As the code shows, the lightweight path essentially spends CPU spinning first, and the spin is adaptive: each time, the number of retries a thread needed to acquire the lock is recorded and used as a reference for the next automatic spin count. If the lock is acquired within the spin budget, there is no escalation to the heavyweight lock. The idea is sketched below.
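An illustrative spin-then-block lock in plain Java (my sketch, not the JVM's implementation; the real spin limit adapts per monitor, and the JVM parks threads on queues rather than using a fixed backoff):

import java.util.concurrent.atomic.AtomicReference;
import java.util.concurrent.locks.LockSupport;

public class SpinThenBlockLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();
    private static final int SPIN_LIMIT = 1000; // fixed here; HotSpot adapts this

    public void lock() {
        Thread me = Thread.currentThread();
        // Phase 1: spin on a CAS, betting that the holder releases soon.
        for (int i = 0; i < SPIN_LIMIT; i++) {
            if (owner.compareAndSet(null, me)) {
                return;
            }
            Thread.onSpinWait();
        }
        // Phase 2: stop burning CPU and block -- the "heavyweight" path.
        while (!owner.compareAndSet(null, me)) {
            LockSupport.parkNanos(1_000_000); // crude stand-in for queue + park/unpark
        }
    }

    public void unlock() {
        owner.set(null); // no queue here, so waiters simply retry
    }
}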

Heavyweight lock acquisition:

// Acquire the heavyweight (contended) lock.
void ObjectMonitor::EnterI(TRAPS) {
  // The current thread.
  Thread * const Self = THREAD;
  assert(Self->is_Java_thread(), "invariant");
  assert(((JavaThread *) Self)->thread_state() == _thread_blocked, "invariant");

  // Try the lock - TATAS
  // One quick attempt to take the lock.
  if (TryLock (Self) > 0) {
    assert(_succ != Self, "invariant");
    assert(_owner == Self, "invariant");
    assert(_Responsible != Self, "invariant");
    return;
  }

  assert(InitDone, "Unexpectedly not initialized");

  // We try one round of spinning *before* enqueueing Self.
  //
  // If the _owner is ready but OFFPROC we could use a YieldTo()
  // operation to donate the remainder of this thread's quantum
  // to the owner.  This has subtle but beneficial affinity
  // effects.

  // Try spinning before enqueueing.
  if (TrySpin(Self) > 0) {
    assert(_owner == Self, "invariant");
    assert(_succ != Self, "invariant");
    assert(_Responsible != Self, "invariant");
    return;
  }

  // The Spin failed -- Enqueue and park the thread ...
  assert(_succ != Self, "invariant");
  assert(_owner != Self, "invariant");
  assert(_Responsible != Self, "invariant");

  // Enqueue "Self" on ObjectMonitor's _cxq.
  // Node acts as a proxy for Self.
  // As an aside, if were to ever rewrite the synchronization code mostly
  // in Java, WaitNodes, ObjectMonitors, and Events would become 1st-class
  // Java objects.  This would avoid awkward lifecycle and liveness issues,
  // as well as eliminate a subset of ABA issues.
  // TODO: eliminate ObjectWaiter and enqueue either Threads or Events.

  ObjectWaiter node(Self);
  Self->_ParkEvent->reset();
  node._prev   = (ObjectWaiter *) 0xBAD;
  // Mark the waiter as living on the cxq.
  node.TState  = ObjectWaiter::TS_CXQ;

  // Push "Self" onto the front of the _cxq.
  // Once on cxq/EntryList, Self stays on-queue until it acquires the lock.
  // Note that spinning tends to reduce the rate at which threads
  // enqueue and dequeue on EntryList|cxq.
  ObjectWaiter * nxt;
  for (;;) {
    // Place the current thread's node at the head of the queue.
    node._next = nxt = _cxq;
    // CAS the queue head to the current thread's node.
    if (Atomic::cmpxchg(&_cxq, nxt, &node) == nxt) break;

    // The cxq head changed -- another thread may have released the lock,
    // so retry the lock once.
    // Interference - the CAS failed because _cxq changed.  Just retry.
    // As an optional optimization we retry the lock.
    if (TryLock (Self) > 0) {
      assert(_succ != Self, "invariant");
      assert(_owner == Self, "invariant");
      assert(_Responsible != Self, "invariant");
      return;
    }
  }

  // Check for cxq|EntryList edge transition to non-null.  This indicates
  // the onset of contention.  While contention persists exiting threads
  // will use a ST:MEMBAR:LD 1-1 exit protocol.  When contention abates exit
  // operations revert to the faster 1-0 mode.  This enter operation may interleave
  // (race) a concurrent 1-0 exit operation, resulting in stranding, so we
  // arrange for one of the contending thread to use a timed park() operations
  // to detect and recover from the race.  (Stranding is form of progress failure
  // where the monitor is unlocked but all the contending threads remain parked).
  // That is, at least one of the contended threads will periodically poll _owner.
  // One of the contending threads will become the designated "Responsible" thread.
  // The Responsible thread uses a timed park instead of a normal indefinite park
  // operation -- it periodically wakes and checks for and recovers from potential
  // strandings admitted by 1-0 exit operations.   We need at most one Responsible
  // thread per-monitor at any given moment.  Only threads on cxq|EntryList may
  // be responsible for a monitor.
  //
  // Currently, one of the contended threads takes on the added role of "Responsible".
  // A viable alternative would be to use a dedicated "stranding checker" thread
  // that periodically iterated over all the threads (or active monitors) and unparked
  // successors where there was risk of stranding.  This would help eliminate the
  // timer scalability issues we see on some platforms as we'd only have one thread
  // -- the checker -- parked on a timer.

  // If both queues are empty, try to make the current thread the Responsible thread.
  if (nxt == NULL && _EntryList == NULL) {
    // Try to assume the role of responsible thread for the monitor.
    // CONSIDER:  ST vs CAS vs { if (Responsible==null) Responsible=Self }
    Atomic::replace_if_null(&_Responsible, Self);
  }

  // The lock might have been released while this thread was occupied queueing
  // itself onto _cxq.  To close the race and avoid "stranding" and
  // progress-liveness failure we must resample-retry _owner before parking.
  // Note the Dekker/Lamport duality: ST cxq; MEMBAR; LD Owner.
  // In this case the ST-MEMBAR is accomplished with CAS().
  int nWakeups = 0;
  int recheckInterval = 1;

  for (;;) {
    // Try the lock again.
    if (TryLock(Self) > 0) break;
    assert(_owner != Self, "invariant");

    // park self
    if (_Responsible == Self) {
      // Park for a bounded interval.
      Self->_ParkEvent->park((jlong) recheckInterval);
      // Increase the recheckInterval, but clamp the value.
      recheckInterval *= 8;
      if (recheckInterval > MAX_RECHECK_INTERVAL) {
        recheckInterval = MAX_RECHECK_INTERVAL;
      }
    } else {
      // Block indefinitely -- this is the OS-level heavyweight wait.
      Self->_ParkEvent->park();
    }

    // Retry the lock after waking up.
    if (TryLock(Self) > 0) break;

    // The lock is still contested.
    // Keep a tally of the # of futile wakeups.
    // Note that the counter is not protected by a lock or updated by atomics.
    // That is by design - we trade "lossy" counters which are exposed to
    // races during updates for a lower probe effect.
    // This PerfData object can be used in parallel with a safepoint.
    // See the work around in PerfDataManager::destroy().
    OM_PERFDATA_OP(FutileWakeups, inc());
    // Count the futile wakeup.
    ++nWakeups;

    // Assuming this is not a spurious wakeup we'll normally find _succ == Self.
    // We can defer clearing _succ until after the spin completes
    // TrySpin() must tolerate being called with _succ == Self.
    // Try yet another round of adaptive spinning.
    if (TrySpin(Self) > 0) break;

    // We can find that we were unpark()ed and redesignated _succ while
    // we were spinning.  That's harmless.  If we iterate and call park(),
    // park() will consume the event and return immediately and we'll
    // just spin again.  This pattern can repeat, leaving _succ to simply
    // spin on a CPU.

    if (_succ == Self) _succ = NULL;

    // Invariant: after clearing _succ a thread *must* retry _owner before parking.
    OrderAccess::fence();
  }

  // Egress :
  // Self has acquired the lock -- Unlink Self from the cxq or EntryList.
  // Normally we'll find Self on the EntryList .
  // From the perspective of the lock owner (this thread), the
  // EntryList is stable and cxq is prepend-only.
  // The head of cxq is volatile but the interior is stable.
  // In addition, Self.TState is stable.

  assert(_owner == Self, "invariant");
  assert(object() != NULL, "invariant");
  // I'd like to write:
  //   guarantee (((oop)(object()))->mark() == markWord::encode(this), "invariant") ;
  // but as we're at a safepoint that's not safe.

  // Remove the current thread from the cxq/EntryList.
  UnlinkAfterAcquire(Self, &node);
  if (_succ == Self) _succ = NULL;

  assert(_succ != Self, "invariant");
  if (_Responsible == Self) {
    _Responsible = NULL;
    OrderAccess::fence(); // Dekker pivot-point

    // We may leave threads on cxq|EntryList without a designated
    // "Responsible" thread.  This is benign.  When this thread subsequently
    // exits the monitor it can "see" such preexisting "old" threads --
    // threads that arrived on the cxq|EntryList before the fence, above --
    // by LDing cxq|EntryList.  Newly arrived threads -- that is, threads
    // that arrive on cxq after the ST:MEMBAR, above -- will set Responsible
    // non-null and elect a new "Responsible" timer thread.
    //
    // This thread executes:
    //    ST Responsible=null; MEMBAR    (in enter epilogue - here)
    //    LD cxq|EntryList               (in subsequent exit)
    //
    // Entering threads in the slow/contended path execute:
    //    ST cxq=nonnull; MEMBAR; LD Responsible (in enter prolog)
    //    The (ST cxq; MEMBAR) is accomplished with CAS().
    //
    // The MEMBAR, above, prevents the LD of cxq|EntryList in the subsequent
    // exit operation from floating above the ST Responsible=null.
  }

  // We've acquired ownership with CAS().
  // CAS is serializing -- it has MEMBAR/FENCE-equivalent semantics.
  // But since the CAS() this thread may have also stored into _succ,
  // EntryList, cxq or Responsible.  These meta-data updates must be
  // visible __before this thread subsequently drops the lock.
  // Consider what could occur if we didn't enforce this constraint --
  // STs to monitor meta-data and user-data could reorder with (become
  // visible after) the ST in exit that drops ownership of the lock.
  // Some other thread could then acquire the lock, but observe inconsistent
  // or old monitor meta-data and heap data.  That violates the JMM.
  // To that end, the 1-0 exit() operation must have at least STST|LDST
  // "release" barrier semantics.  Specifically, there must be at least a
  // STST|LDST barrier in exit() before the ST of null into _owner that drops
  // the lock.   The barrier ensures that changes to monitor meta-data and data
  // protected by the lock will be visible before we release the lock, and
  // therefore before some other thread (CPU) has a chance to acquire the lock.
  // See also: http://gee.cs.oswego.edu/dl/jmm/cookbook.html.
  //
  // Critically, any prior STs to _succ or EntryList must be visible before
  // the ST of null into _owner in the *subsequent* (following) corresponding
  // monitorexit.  Recall too, that in 1-0 mode monitorexit does not necessarily
  // execute a serializing instruction.

  return;
}

Acquiring the heavyweight lock uses two queues; a thread that fails to get the lock is enqueued and parked. Notably, AQS (AbstractQueuedSynchronizer) also implements waiting for a lock with a queue.

The two queues follow Michael Scott's "2Q"-style design, which reduces the cost of concurrent access to a single queue and improves performance.

As the HotSpot comments note, even a modest amount of fixed spinning greatly reduces enqueue/dequeue traffic on EntryList|cxq; in other words, spinning relieves contention on the "inner" lock and on the monitor metadata.

_cxq points at the set of recently arrived threads attempting to enter. Because threads are pushed onto _cxq with CAS, the RATs (recently arrived threads) necessarily form a singly linked LIFO list.

A key goal, following Michael Scott's "2Q" algorithm, is to minimize the queue and monitor-metadata operations performed while the monitor lock is held, i.e. to keep monitor lock hold times as short as possible.

Releasing the heavyweight lock:

void ObjectMonitor::exit(bool not_suspended, TRAPS) {
  Thread * const Self = THREAD;
  // Verify that the caller actually owns the lock.
  if (THREAD != _owner) {
    if (THREAD->is_lock_owned((address) _owner)) {
      // Transmute _owner from a BasicLock pointer to a Thread address.
      // We don't need to hold _mutex for this transition.
      // Non-null to Non-null is safe as long as all readers can
      // tolerate either flavor.
      assert(_recursions == 0, "invariant");
      _owner = THREAD;
      _recursions = 0;
    } else {
      // Apparent unbalanced locking ...
      // Naively we'd like to throw IllegalMonitorStateException.
      // As a practical matter we can neither allocate nor throw an
      // exception as ::exit() can be called from leaf routines.
      // see x86_32.ad Fast_Unlock() and the I1 and I2 properties.
      // Upon deeper reflection, however, in a properly run JVM the only
      // way we should encounter this situation is in the presence of
      // unbalanced JNI locking. TODO: CheckJNICalls.
      // See also: CR4414101
#ifdef ASSERT
      LogStreamHandle(Error, monitorinflation) lsh;
      lsh.print_cr("ERROR: ObjectMonitor::exit(): thread=" INTPTR_FORMAT
                   " is exiting an ObjectMonitor it does not own.", p2i(THREAD));
      lsh.print_cr("The imbalance is possibly caused by JNI locking.");
      print_debug_style_on(&lsh);
#endif
      assert(false, "Non-balanced monitor enter/exit!");
      return;
    }
  }

  // Still reentered -- just decrement the reentry count.
  if (_recursions != 0) {
    _recursions--;        // this is simple recursive enter
    return;
  }

  // Invariant: after setting Responsible=null an thread must execute
  // a MEMBAR or other serializing instruction before fetching EntryList|cxq.
  _Responsible = NULL;

#if INCLUDE_JFR
  // get the owner's thread id for the MonitorEnter event
  // if it is enabled and the thread isn't suspended
  if (not_suspended && EventJavaMonitorEnter::is_enabled()) {
    // Record the thread that last held the lock.
    _previous_owner_tid = JFR_THREAD_ID(Self);
  }
#endif

  for (;;) {
    assert(THREAD == _owner, "invariant");

    // release semantics: prior loads and stores from within the critical section
    // must not float (reorder) past the following store that drops the lock.
    // Store NULL into _owner -- this is what actually releases the lock.
    Atomic::release_store(&_owner, (void*)NULL);   // drop the lock
    // Memory barrier: subsequent loads observe all writes made in the critical section.
    OrderAccess::storeload();

    // See if we need to wake a successor.
    if ((intptr_t(_EntryList)|intptr_t(_cxq)) == 0 || _succ != NULL) {
      return;
    }
    // Other threads are blocked trying to acquire the lock.

    // Normally the exiting thread is responsible for ensuring succession,
    // but if other successors are ready or other entering threads are spinning
    // then this thread can simply store NULL into _owner and exit without
    // waking a successor.  The existence of spinners or ready successors
    // guarantees proper succession (liveness).  Responsibility passes to the
    // ready or running successors.  The exiting thread delegates the duty.
    // More precisely, if a successor already exists this thread is absolved
    // of the responsibility of waking (unparking) one.
    //
    // The _succ variable is critical to reducing futile wakeup frequency.
    // _succ identifies the "heir presumptive" thread that has been made
    // ready (unparked) but that has not yet run.  We need only one such
    // successor thread to guarantee progress.
    // See http://www.usenix.org/events/jvm01/full_papers/dice/dice.pdf
    // section 3.3 "Futile Wakeup Throttling" for details.
    //
    // Note that spinners in Enter() also set _succ non-null.
    // In the current implementation spinners opportunistically set
    // _succ so that exiting threads might avoid waking a successor.
    // Another less appealing alternative would be for the exiting thread
    // to drop the lock and then spin briefly to see if a spinner managed
    // to acquire the lock.  If so, the exiting thread could exit
    // immediately without waking a successor, otherwise the exiting
    // thread would need to dequeue and wake a successor.
    // (Note that we'd need to make the post-drop spin short, but no
    // shorter than the worst-case round-trip cache-line migration time.
    // The dropped lock needs to become visible to the spinner, and then
    // the acquisition of the lock by the spinner must become visible to
    // the exiting thread).

    // It appears that an heir-presumptive (successor) must be made ready.
    // Only the current lock owner can manipulate the EntryList or
    // drain _cxq, so we need to reacquire the lock.  If we fail
    // to reacquire the lock the responsibility for ensuring succession
    // falls to the new owner.
    //
    // If the CAS from NULL fails, another thread is already racing for the lock.
    if (!Atomic::replace_if_null(&_owner, THREAD)) {
      return;
    }

    guarantee(_owner == THREAD, "invariant");

    ObjectWaiter * w = NULL;

    w = _EntryList;
    if (w != NULL) {
      // I'd like to write: guarantee (w->_thread != Self).
      // But in practice an exiting thread may find itself on the EntryList.
      // Let's say thread T1 calls O.wait().  Wait() enqueues T1 on O's waitset and
      // then calls exit().  Exit release the lock by setting O._owner to NULL.
      // Let's say T1 then stalls.  T2 acquires O and calls O.notify().  The
      // notify() operation moves T1 from O's waitset to O's EntryList. T2 then
      // release the lock "O".  T2 resumes immediately after the ST of null into
      // _owner, above.  T2 notices that the EntryList is populated, so it
      // reacquires the lock and then finds itself on the EntryList.
      // Given all that, we have to tolerate the circumstance where "w" is
      // associated with Self.
      assert(w->TState == ObjectWaiter::TS_ENTER, "invariant");
      ExitEpilog(Self, w);
      return;
    }

    // If we find that both _cxq and EntryList are null then just
    // re-run the exit protocol from the top.
    w = _cxq;
    if (w == NULL) continue;

    // Drain _cxq into EntryList - bulk transfer.
    // First, detach _cxq.
    // The following loop is tantamount to: w = swap(&cxq, NULL)
    for (;;) {
      assert(w != NULL, "Invariant");
      ObjectWaiter * u = Atomic::cmpxchg(&_cxq, w, (ObjectWaiter*)NULL);
      if (u == w) break;
      w = u;
    }

    assert(w != NULL, "invariant");
    assert(_EntryList == NULL, "invariant");

    // Convert the LIFO SLL anchored by _cxq into a DLL.
    // The list reorganization step operates in O(LENGTH(w)) time.
    // It's critical that this step operate quickly as
    // "Self" still holds the outer-lock, restricting parallelism
    // and effectively lengthening the critical section.
    // Invariant: s chases t chases u.
    // TODO-FIXME: consider changing EntryList from a DLL to a CDLL so
    // we have faster access to the tail.

    _EntryList = w;
    ObjectWaiter * q = NULL;
    ObjectWaiter * p;
    for (p = w; p != NULL; p = p->_next) {
      guarantee(p->TState == ObjectWaiter::TS_CXQ, "Invariant");
      p->TState = ObjectWaiter::TS_ENTER;
      p->_prev = q;
      q = p;
    }

    // In 1-0 mode we need: ST EntryList; MEMBAR #storestore; ST _owner = NULL
    // The MEMBAR is satisfied by the release_store() operation in ExitEpilog().

    // See if we can abdicate to a spinner instead of waking a thread.
    // A primary goal of the implementation is to reduce the
    // context-switch rate.
    if (_succ != NULL) continue;

    w = _EntryList;
    if (w != NULL) {
      guarantee(w->TState == ObjectWaiter::TS_ENTER, "invariant");
      ExitEpilog(Self, w);
      return;
    }
  }
}

From the source, the synchronized lock-acquisition flow is as follows (the original flow chart was lost): first try the biased lock; on contention, revoke the bias and fall back to the lightweight CAS-and-spin path; if spinning fails, inflate to the heavyweight ObjectMonitor, where threads queue on _cxq/_EntryList and park.

Reading the synchronized source, the implementation turns out to be quite ingenious, and the code keeps being refined. The engineers who kept pushing synchronized's performance forward made a huge contribution, and as a developer I genuinely admire that pursuit of excellence.

Keep learning in 2022, let's make progress together ✌️
