Answer


Yes, synchronized pins virtual threads to carrier threads, and this is exactly what you're seeing. Here's the fix strategy:

Why synchronized pins virtual threads

Virtual threads are multiplexed onto a small pool of carrier threads (by default one per CPU core, i.e. Runtime.getRuntime().availableProcessors(), tunable with the jdk.virtualThreadScheduler.parallelism system property). The JVM can unmount a virtual thread at a blocking point to let another virtual thread use its carrier thread, except when the virtual thread is inside a synchronized block.

This is a JVM implementation constraint: synchronized uses the object's monitor, and in current HotSpot releases monitor ownership is tied to the underlying platform (carrier) thread, so the JVM can't hand the monitor over while the virtual thread is unmounted. Instead it pins the virtual thread to its carrier for the entire synchronized scope. Pinning is harmless when the critical section only does quick in-memory work; it hurts when the thread blocks (on I/O, a sleep, or another lock) while pinned.

Under load, if many virtual threads block inside the same synchronized method or block, each of them pins a carrier thread: the one holding the monitor while it waits on I/O, and the ones queued up on monitor entry behind it. With only a handful of carrier threads available, they can all end up occupied by pinned, blocked virtual threads. The scheduler then has nothing left to run your other virtual threads on, including whichever one would release the resource everyone is waiting for, which explains your deadlock.
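
If you want to see the failure mode in isolation, here is a minimal repro sketch (the class name, lock object, task count, and sleep duration are made up for illustration, not taken from your code):

import java.time.Duration;
import java.util.concurrent.Executors;

// Every task blocks inside or in front of the same synchronized block. The
// thread holding the monitor pins its carrier while it sleeps, and threads
// blocked on monitor entry stay mounted too, so a handful of tasks exhausts
// the default carrier pool (one carrier per CPU core) and starves the rest.
public class PinningDemo {
  private static final Object LOCK = new Object();

  public static void main(String[] args) {
    try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
      for (int i = 0; i < 1_000; i++) {
        executor.submit(() -> {
          synchronized (LOCK) {                   // pins the virtual thread
            sleepQuietly(Duration.ofMillis(100)); // blocking while pinned ties up the carrier
          }
        });
      }
    } // close() waits for all tasks to finish
    // Run with -Djdk.tracePinnedThreads=full to print a stack trace on each pinning event.
  }

  private static void sleepQuietly(Duration d) {
    try {
      Thread.sleep(d);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }
}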

The solution: systematic ReentrantLock migration

Short answer: Yes, replace synchronized with ReentrantLock, but do it strategically.

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Before: pins the virtual thread to its carrier while db.fetch blocks
public synchronized Data query() {
  return db.fetch(id);
}

// After: doesn't pin; the virtual thread can unmount while it waits
private final Lock queryLock = new ReentrantLock();

public Data query() {
  queryLock.lock();
  try {
    return db.fetch(id);
  } finally {
    queryLock.unlock();
  }
}

ReentrantLock is built on java.util.concurrent machinery (LockSupport.park) that the virtual-thread scheduler understands. The JVM can unmount your virtual thread both while it waits for the lock and while it blocks inside the critical section, so the carrier thread stays free to run other virtual threads.
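
A side benefit of the migration: unlike synchronized, ReentrantLock lets you bound how long callers wait, which turns a silent pile-up into a fast, visible failure. A sketch (the 500 ms timeout and method name are illustrative, not from your code):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Bounded waiting: give up after 500 ms instead of queueing indefinitely.
private final ReentrantLock queryLock = new ReentrantLock();

public Data queryWithTimeout() throws InterruptedException {
  if (!queryLock.tryLock(500, TimeUnit.MILLISECONDS)) {
    throw new IllegalStateException("query lock not available within 500 ms");
  }
  try {
    return db.fetch(id);
  } finally {
    queryLock.unlock();
  }
}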

Priority order for migration

  1. High-traffic locks first — any synchronized that's hit frequently under load. The contention is where virtual threads suffer most.
  2. Third-party libraries — JDBC drivers, connection pools, caching libraries. You can't change their code. Mitigate by:
    • Wrapping JDBC calls in a small, dedicated ExecutorService backed by platform threads (shown in the next section), so the pinning-prone work can't starve the carrier thread pool
    • Using connection pool libraries that are virtual-thread-aware (HikariCP 6.0+, the Quarkus datasource, and recent Jakarta EE connection pools all advertise virtual thread support)
  3. Legacy internal code — slower priority; migrate incrementally.

JDBC driver mitigation (critical)

JDBC is a common culprit. Most drivers use synchronized internally on connection state, statement execution, and result set iteration. If you can't update the driver immediately, isolate JDBC calls:

private final ExecutorService jdbcExecutor =
  Executors.newFixedThreadPool(
    Math.max(4, Runtime.getRuntime().availableProcessors() / 2),
    Thread.ofPlatform().name("jdbc-", 0).factory()); // platform threads: pinned JDBC work stays off the carrier pool

public Data query(String sql) throws Exception {
  return jdbcExecutor.submit(() -> {
    // Any pinning inside the JDBC driver happens here, on the dedicated pool
    try (var conn = dataSource.getConnection();
         var stmt = conn.prepareStatement(sql);
         var rs = stmt.executeQuery()) {
      // ... map rs to Data and return it
    }
  }).get(); // blocks this virtual thread, but its carrier is freed to run others
}

This contains the pinning damage: the blocking JDBC work runs on a small, dedicated pool of platform threads, while your thousands of virtual threads keep running on the carrier pool.
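
A usage sketch for completeness (here service is a hypothetical wrapper around the query method above, and the SQL string is a placeholder): thousands of virtual threads fan out queries, each parking in Future.get() while the small JDBC pool does the pinned work.

import java.util.concurrent.Executors;

// Each submitted task parks in Future.get() inside service.query(), so it
// releases its carrier thread while the bounded JDBC pool runs the statement.
try (var requests = Executors.newVirtualThreadPerTaskExecutor()) {
  for (int i = 0; i < 10_000; i++) {
    requests.submit(() -> service.query("SELECT 1")); // 'service' and the SQL are illustrative
  }
}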

Spring Boot integration

With Spring Boot 3.2+ and virtual threads enabled, be explicit about thread pools:

spring:
  threads:
    virtual:
      enabled: true
  datasource:
    hikari:
      maximum-pool-size: 10 # size this for your database's capacity, not for the number of virtual threads

HikariCP's thread factory can't be expressed in YAML, so set it programmatically to keep the pool's internal threads on platform threads:

import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

@Configuration
public class VirtualThreadConfig {
  @Bean
  public DataSource dataSource(HikariConfig config) {
    // Keep HikariCP's housekeeping threads on platform threads
    config.setThreadFactory(Thread.ofPlatform().factory());
    return new HikariDataSource(config);
  }
}
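
To confirm that request handling really runs on virtual threads after this configuration, a quick sanity check is a throwaway endpoint (the controller name and path here are hypothetical):

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Diagnostic endpoint: with spring.threads.virtual.enabled=true, the response
// should look like "VirtualThread[#123]/runnable@ForkJoinPool-1-worker-1".
@RestController
class ThreadCheckController {

  @GetMapping("/internal/thread-check")
  String threadCheck() {
    return Thread.currentThread().toString();
  }
}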

Monitoring to confirm the fix

Use JFR (Java Flight Recorder) to visualize the pinning:

jcmd <pid> JFR.start name=pinning duration=60s filename=pinning.jfr
jcmd <pid> JFR.dump name=pinning filename=final.jfr
jcmd <pid> JFR.stop name=pinning

Open the recording in JDK Mission Control and look for jdk.VirtualThreadPinned events (emitted by default when a virtual thread stays pinned for more than 20 ms); each event carries the stack trace of the pinned section. You can also start the JVM with -Djdk.tracePinnedThreads=full to print a stack trace every time a thread pins. After fixing synchronized and JDBC, those events should disappear.
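
If you would rather scan the recording from code than open it in JDK Mission Control, a small sketch using the JDK's JFR consumer API (the class name is made up; the file name matches the commands above):

import java.nio.file.Path;
import jdk.jfr.consumer.RecordingFile;

// Print every pinning event in a finished recording: how long the virtual
// thread stayed pinned and the stack trace of the synchronized section.
public class PinnedEventScan {
  public static void main(String[] args) throws Exception {
    for (var event : RecordingFile.readAllEvents(Path.of("final.jfr"))) {
      if ("jdk.VirtualThreadPinned".equals(event.getEventType().getName())) {
        System.out.println(event.getDuration().toMillis() + " ms pinned at:");
        System.out.println(event.getStackTrace());
      }
    }
  }
}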

TL;DR

  1. Migrate high-contention synchronized blocks and methods to ReentrantLock
  2. Isolate third-party library calls that use synchronized (especially JDBC) in a bounded thread pool to prevent starvation
  3. Use virtual-thread-aware libraries (HikariCP 6.0+, etc.)
  4. Monitor with JFR to find remaining pinning hotspots

This is the standard playbook for moving an existing Java codebase onto virtual threads (Project Loom): fix the locks you own, isolate the libraries you don't, and measure with JFR until the pinning events are gone.