This article presents a sparsity-aware analog-digital hybrid embedded dynamic random access memory (eDRAM) computing-in-memory (CIM) processor for highly energy-efficient deep neural network (DNN) acceleration. Although CIM architectures execute multiply-and-accumulate (MAC) operations more efficiently than von Neumann architectures, their practical energy efficiency for DNN acceleration remains limited due to several challenges. First, prior CIMs struggled to exploit massive sparsity because of their highly parallelized structures. Second, CIMs typically compute in either the analog or the digital domain, facing fundamental trade-offs between analog-to-digital converter (ADC) overhead and throughput. Third, system throughput is degraded by 1) workload imbalance among CIM macros during sparsity-aware computation, caused by random sparsity patterns, which undermines CIM macro utilization, and 2) the frequent refresh and weight-update operations that have plagued prior eDRAM CIMs. To address these challenges, the proposed eDRAM CIM processor introduces four key features: 1) input activation (IA) grouping convolution, which completely skips zero-weight computations by activating only the effective rows of the CIM macro, increasing the effective computation ratio by 4.59×; 2) a hybrid-CIM macro integrating a SAR-Flash ADC (SF-ADC) and reversed-MAC near-memory logic (RM-NML) for energy-efficient MAC operations in both the analog and the digital domains, improving macro efficiency by 2.39×; 3) sparsity-aware proactive scheduling (SPS) to maximize CIM macro utilization, reducing system latency by 10.4%; and 4) in-macro multi-row multi-task (MRMT) control that enables concurrent refresh/update during in-memory computation, yielding a 22.0% reduction in system latency and a 1.3× increase in system energy efficiency.
Fabricated in a 28 nm CMOS process, the proposed processor demonstrates high energy efficiency across various benchmarks, outperforming previous CIM processors by 1.55× and 10.37× on ResNet-18 and VGGNet-16, respectively.