This article presents a high-density, energy-efficient analog-digital hybrid computing-in-memory (CIM) processor for ternary neural network (TNN) acceleration, built on a transpose ternary embedded-DRAM bitcell. The proposed CIM processor significantly improves computational robustness and energy efficiency at both the macro and system levels through four key innovations: 1) current-mode vertical analog multiplication-and-accumulation (MAC) with in-bitcell gate-voltage biasing, reducing MAC variation by 87% under process, voltage, and temperature (PVT) variation; 2) a ternary-bit-per-cycle (TPC) successive approximation register (SAR) analog-to-digital converter (ADC) with a shared capacitor digital-to-analog converter (CDAC), limiting ADC area overhead to 15% and improving ADC efficiency by 1.49x; 3) horizontal digital partial-sum (Psum) logic for area- and power-efficient Psum accumulation across MACs, reducing area by 39% and power by 57% compared with a conventional full adder; and 4) input channel-first tiled convolution that substantially enhances system energy efficiency by eliminating inter-macro data transactions, cutting the network-on-chip power overhead to 2%. Fabricated in 28 nm CMOS technology, the proposed CIM processor achieves a cell density of 1.58 Mb/mm² and attains macro- and system-level energy efficiencies of 478 TOPS/W and 273.48 TOPS/W, respectively, outperforming state-of-the-art CIM processors.
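The ternary MAC and the input channel-first tiling can be illustrated with a minimal functional sketch. This is not the authors' hardware or RTL: the tile size `TILE_IC` and the function names are illustrative assumptions, and the point is only that each tile's partial sum is reduced locally (as within one macro) before a single final addition, so no intermediate Psums cross macro boundaries.

```python
# Functional sketch of a ternary MAC with input channel-first tiling.
# TILE_IC is a hypothetical per-macro input-channel tile size, not a
# value from the paper.
TILE_IC = 16

def ternary_mac(xs, ws):
    """Multiply-and-accumulate with ternary weights in {-1, 0, +1}."""
    assert all(w in (-1, 0, 1) for w in ws)
    return sum(x * w for x, w in zip(xs, ws))

def tiled_dot(xs, ws, tile=TILE_IC):
    """Reduce each input-channel tile locally (one 'macro' per tile),
    then combine the per-tile partial sums once at the end."""
    total = 0
    for start in range(0, len(xs), tile):
        total += ternary_mac(xs[start:start + tile],
                             ws[start:start + tile])
    return total

# The tiled result matches the untiled dot product.
xs = list(range(32))
ws = [(1, -1, 0)[i % 3] for i in range(32)]
assert tiled_dot(xs, ws) == sum(x * w for x, w in zip(xs, ws))
```

Because the tiles partition the input channels, the per-tile accumulation is exact, which is why the scheme can drop inter-macro Psum traffic without approximation.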