Samsung HRDC · Capstone Project

Diagnosing Samsung's CIC learning platform — a data-driven case for LXP transition.

Illustrative data. Specific figures shown throughout have been adjusted from the original confidential analysis. The original report was written in Korean for Samsung internal use. All directional insights and strategic recommendations are accurate.

CIC is Samsung's enterprise-wide learning platform serving employees across multiple subsidiaries. This capstone project combined systematic content analysis with ARCS-based UX diagnosis and LXP transition strategy — proposing data-grounded improvements to close the gap between what CIC offers and what employees actually engage with.

Needs Analysis · Content Data Analysis · ARCS Model · LMS → LXP Strategy · UX Diagnosis
CIC — Samsung Learning Platform (illustrative UI mock)
Navigation: Feed · Content · Class · Campus. Curation modules: Curated · AI Picks · Trending · Today's Content.
Sample cards: AI Insights Weekly (AI; low views, high engagement), My Life's Book (Book; high views, low engagement), Leadership Lab (Leadership; mid views, average engagement).
Headline stats: top views ~15K (AI content); top engagement ~7% (a low-view item); avg. engagement ~3% across 200+ items.
Role: HRD Intern, Samsung HRDC
Scope: CIC Platform, 200+ contents analyzed
Framework: ARCS · LXP Strategy · Nielsen UX Heuristics
Language: Originally Korean, translated for portfolio
01 · Problem

A platform caught between LMS and LXP — with user behavior telling a different story.

Post-pandemic, Samsung employees' learning expectations shifted. Accustomed to YouTube, Netflix, and short-form content platforms, users now expect personalized recommendations, intuitive interfaces, and content that respects their time. CIC had begun incorporating LXP-like features — feeds, campus communities, curated playlists. But the core architecture still reflected a compliance-first LMS.

How do we move from a platform that manages learning to one that drives it — using real content performance data to close the gap between what CIC offers and what employees actually engage with?

CIC Platform UI — Feed, Content, and Campus views with UX annotations. Original interface in Korean; annotated during field observation phase.
A · UI/UX Friction
Close button misplaced in feed popups; "currently watching" buried at the bottom; thumbnail text illegible; mobile touch targets not optimized.
B · Curation Gaps
Majority of content exceeds 10 minutes; short-form absent; tag search inaccurate; recommendations lack explanatory context.
C · AI & Personalization
No contextual search; no personalized recommendation engine; campus content lacks connection flow, breaking learning journeys.
D · Motivation Architecture
Points and badges exist but rewards are invisible; no internal expert or creator incentive structure to drive content sharing.
02 · Analysis

200+ contents. Four counterintuitive findings.

Figures below are illustrative. Actual data from the original Korean-language report is confidential. Engagement rate = (likes + comments + shares) / views.

CIC platform content was systematically analyzed via HTML source extraction — capturing views, engagement signals, content length, and section classification across 200+ published items.
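The engagement-rate metric defined above can be computed directly per item. A minimal sketch, with hypothetical field names and a sample item mirroring the "hidden gem" figures (~200 views, ~7% engagement) — not actual CIC data:

```python
def engagement_rate(item: dict) -> float:
    """Engagement rate = (likes + comments + shares) / views."""
    interactions = item["likes"] + item["comments"] + item["shares"]
    return interactions / item["views"] if item["views"] else 0.0

# Hypothetical item mirroring the "hidden gem" pattern (~200 views, ~7% engagement)
hidden_gem = {"views": 200, "likes": 9, "comments": 3, "shares": 2}
rate = engagement_rate(hidden_gem)  # 14 / 200 = 0.07
```

Guarding against zero views keeps unreleased or just-published items from crashing the aggregation.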

Finding 01
The Views–Engagement Paradox
High-view sections and high-engagement sections showed opposite patterns. The curation algorithm surfaced popular content, not satisfying content.
Hidden gem: ~200 views → ~7% engagement (rank #1 in satisfaction)
Finding 02
AI Content: Consistent High Performance
Every AI-related section maintained 4%+ engagement consistently — well above platform average — yet supply was severely limited relative to demand.
AI content: ~40% higher engagement vs. platform avg.
Finding 03
Passive Consumption in Reading Content
Book/reading content had the second-highest views but one of the lowest engagement rates — users browse but don't interact or share.
High views / Low engagement → passive scrolling behavior
Finding 04
Social Proof Works at Scale
"What people are watching now" maintained strong engagement across a large content pool — social context drives selection.
Large-pool section: ~4.5% avg. engagement confirmed
Chart: Section Performance, Views vs. Engagement Rate (illustrative). Axes: avg. views and engagement %.
The paradox: The section with the highest engagement score received among the fewest views — buried by a view-count-based algorithm. The most satisfying content was the least visible.
03 · Process

From platform observation to strategic diagnosis.

The project followed a structured consulting-style process: field observation, data collection, theoretical framework application, empirical analysis, and strategy formulation.

Step 01
Platform Observation & Data Collection
Conducted systematic observation of CIC as an active intern user. Extracted content metadata from 200+ published items via HTML source analysis — capturing views, engagement signals (likes, comments, shares), duration, upload date, and section. Built a structured dataset for quantitative analysis.
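The HTML-source extraction in this step can be sketched as follows. The real CIC markup is internal, so the attribute names (`data-section`, `data-views`, and so on), the sample snippet, and the regex are hypothetical stand-ins for whatever structure the actual pages use:

```python
import re

# Hypothetical markup with the same fields captured in the study.
SAMPLE_HTML = """
<div class="card" data-section="AI" data-views="15000"
     data-likes="600" data-comments="200" data-shares="250"></div>
<div class="card" data-section="Book" data-views="9000"
     data-likes="120" data-comments="40" data-shares="20"></div>
"""

CARD = re.compile(
    r'data-section="(?P<section>[^"]+)"\s+data-views="(?P<views>\d+)"\s+'
    r'data-likes="(?P<likes>\d+)"\s+data-comments="(?P<comments>\d+)"\s+'
    r'data-shares="(?P<shares>\d+)"'
)

def extract_items(html: str) -> list[dict]:
    """Parse each content card into a row of the analysis dataset."""
    items = []
    for m in CARD.finditer(html):
        d = m.groupdict()
        items.append({k: (v if k == "section" else int(v)) for k, v in d.items()})
    return items

items = extract_items(SAMPLE_HTML)
```

In practice a real HTML parser (e.g. BeautifulSoup) would be more robust than a regex; the sketch only shows the shape of the resulting dataset.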
Step 02
ARCS Framework Diagnostic
Applied Keller's ARCS model (Attention, Relevance, Confidence, Satisfaction) to assess CIC's learning motivation architecture. Mapped each platform feature to an ARCS element, identifying structural gaps that the quantitative data later confirmed empirically.
Step 03
LMS → LXP Literature Review
Reviewed LXP transition frameworks, Nielsen's usability heuristics, and mobile-first design principles. Benchmarked CIC against leading LXP platforms (Degreed, EdCast, Viva Learning) to identify specific experience gaps relevant to enterprise-scale deployment at Samsung.
Step 04
Empirical Content Analysis
Calculated engagement rates across all items and aggregated by section. Cross-referenced quantitative findings with qualitative UX observations to surface the views–engagement paradox and identify high-ROI content categories. Written and analyzed entirely in Korean for Samsung internal stakeholders.
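The per-section aggregation described above can be sketched as follows. Field names and the two sample items are hypothetical, chosen only to reproduce the views–engagement paradox in miniature:

```python
from collections import defaultdict

def aggregate_by_section(items: list[dict]) -> dict:
    """Average views and engagement rate per section (field names assumed)."""
    buckets = defaultdict(list)
    for it in items:
        eng = (it["likes"] + it["comments"] + it["shares"]) / it["views"]
        buckets[it["section"]].append((it["views"], eng))
    return {
        sec: {"avg_views": sum(v for v, _ in rows) / len(rows),
              "avg_engagement": sum(e for _, e in rows) / len(rows)}
        for sec, rows in buckets.items()
    }

# Hypothetical items mirroring the paradox: low-view AI content engages more
items = [
    {"section": "AI", "views": 200, "likes": 10, "comments": 2, "shares": 2},
    {"section": "Book", "views": 9000, "likes": 80, "comments": 5, "shares": 5},
]
stats = aggregate_by_section(items)
# AI: 14/200 = 7% engagement on low views; Book: 90/9000 = 1% on high views
```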
Step 05
Strategy Formulation & Roadmap
Translated findings into a phased action roadmap — categorizing improvements into immediate algorithmic fixes, medium-term recommendation engine development, and long-term LXP transition architecture with personalized learning paths.
ARCS Diagnostic Summary
ARCS | CIC Strength | Gap Identified | Data Signal
Attention | Curated content; feed/vlog features | Non-intuitive UI; no short-form entry; high-engagement content hidden | Top-engagement section buried at low visibility rank
Relevance | Some role-based content; AI trend attempts | Global content lacking; tag accuracy low; no personalized curation | AI sections: consistent high engagement, severe undersupply
Confidence | Campus communities; "currently watching" | AI recommendation missing; re-entry UX buried; learning path unclear | Social proof section: strong engagement confirmed at scale
Satisfaction | Badge system; feed sharing culture | Reward visibility near zero; no expert/creator incentives | Reading: high views, low engagement → passive consumption
04 · Recommendations

Three data-backed strategies. One phased roadmap.

All recommendations are grounded directly in the content dataset. The overarching principle: move from a view-count-driven platform to an engagement-quality-driven learning experience.

Core Strategic Proposals
Strategy 01
Hybrid Curation Algorithm

Replace view-count-only ranking with a weighted hybrid: Score = (engagement rate × 0.7) + (normalized views × 0.3). Surfaces hidden high-quality content while maintaining broad discovery. Immediately deployable without new infrastructure.
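A minimal sketch of the proposed scoring rule. One assumption added here that the proposal leaves implicit: the engagement rate is min-max scaled against the platform maximum, just as views are, so the 0.7/0.3 weights compare like with like. Figures reuse the illustrative ~7%/~3% and ~15K numbers from the analysis:

```python
def hybrid_score(eng_rate: float, views: int,
                 max_eng: float, max_views: int) -> float:
    """Score = 0.7 * engagement (quality) + 0.3 * views (reach).

    Both signals are scaled to [0, 1] — an assumption added so the
    weights operate on comparable ranges.
    """
    e = eng_rate / max_eng if max_eng else 0.0
    v = views / max_views if max_views else 0.0
    return 0.7 * e + 0.3 * v

# Hidden gem (~7% eng., ~200 views) vs. view leader (~3% eng., ~15K views)
gem = hybrid_score(0.07, 200, max_eng=0.07, max_views=15_000)
leader = hybrid_score(0.03, 15_000, max_eng=0.07, max_views=15_000)
```

Under this weighting the hidden gem outranks the view leader, while the 30% reach term still keeps broadly popular content discoverable.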

Strategy 02
AI Content Hub Expansion

AI-related content shows ~40% higher engagement than platform average yet supply is severely limited. Expand to a dedicated AI Learning Hub, integrate real-time trend content, and position AI sections as primary entry points for upskilling-motivated learners.

Strategy 03
"Hidden Gem" Auto-Discovery

Create an automated flag for content with high engagement + low views for featured placement. A "You might have missed this" section surfaces high-quality underexposed content before the view-based algorithm buries it further.
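The flag can be sketched as a simple threshold rule. The 1.5×-median cutoff and the sample items are assumptions for illustration, not values from the original report:

```python
from statistics import median

def flag_hidden_gems(items: list[dict], eng_factor: float = 1.5) -> list[dict]:
    """Flag items with engagement well above the platform median but
    views below it — candidates for a "You might have missed this" slot."""
    med_views = median(i["views"] for i in items)
    med_eng = median(i["eng"] for i in items)
    return [i for i in items
            if i["eng"] >= eng_factor * med_eng and i["views"] < med_views]

# Hypothetical pool mirroring the three card patterns from the analysis
items = [
    {"id": "ai-weekly", "views": 200, "eng": 0.07},   # hidden gem
    {"id": "book", "views": 9000, "eng": 0.01},       # popular but passive
    {"id": "lead", "views": 4000, "eng": 0.03},       # mid / average
]
gems = flag_hidden_gems(items)
```

Median-relative thresholds adapt as the platform average drifts, so the rule needs no manual retuning as the content pool grows.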

Implementation Roadmap
Short-term · Now
Engagement-based exposure fix: surface top-engagement content above the fold; create a Hidden Gem section. Evidence: views–engagement gap.
AI content expansion: expand AI content supply; add weekly trend content. Evidence: consistent 4%+ engagement.
Mid-term
Hybrid recommendation algorithm: deploy 70% engagement + 30% views weighting. Closes the views–engagement gap.
Social proof enhancement: add "Why recommended" and "colleagues like you watched" cues. Evidence: social proof effect confirmed.
Long-term · LXP Vision
Reward visibility system: engagement-based recognition; AI Creator program. Addresses passive consumption.
Full LXP transition: personalized learning paths; individual development planning. Integrated data strategy.
05 · Outcome

Evidence-based recommendations for a platform serving thousands of Samsung employees.

200+
Contents systematically analyzed across CIC platform
~40%
Higher engagement in AI content vs. platform average
~2×
Engagement gap between top (~7%) and average (~3%) content sections
  • Delivered a comprehensive diagnostic report (in Korean) revealing the views–engagement paradox — demonstrating that Samsung's curation algorithm actively surfaced less satisfying content to more users.
  • Applied ARCS model across all four motivation dimensions, backed by empirical content data — producing theory-grounded, actionable recommendations rather than intuition-based suggestions.
  • Proposed a hybrid recommendation algorithm as an immediately deployable fix, requiring no new infrastructure — a practical constraint given enterprise system complexity.
  • Identified AI content as Samsung's highest-ROI content investment opportunity — consistent high engagement across all AI sections with severe under-supply relative to employee demand.
  • Produced a phased LXP transition roadmap spanning immediate UX fixes, mid-term algorithm improvements, and long-term personalized learning path architecture.
Analyst's Reflection

The most counterintuitive moment came when the data showed the highest-engagement section — one I'd initially dismissed as generic — sitting at the bottom of the visibility ranking. That single finding reframed the entire project: the problem wasn't content quality, it was measurement. A platform optimizing for the wrong metric will systematically reward the wrong content. That's a learning system design problem, not a content problem — and it's exactly the gap that instructional designers are positioned to close.