Machine Learning Case Study — Food Tech / Marketplace

Food Delivery Customer Intelligence: RFM Segmentation, Churn Propensity & Coupon Engine

A four-stage analytics pipeline that reframes an e-commerce customer dataset as a food delivery platform (iFood / UberEats / DoorDash style). The project covers domain-driven feature engineering (AOV, churn target, RFM segmentation), descriptive analytics comparing subscription tiers and coupon efficiency, a Random Forest churn propensity model achieving ROC-AUC 0.992, and a prescriptive coupon engine that assigns each of the 350 customers a Next Best Action, from “aggressive win-back” to “do not send — save margin.”

Python · scikit-learn · Random Forest · RFM Segmentation · Stratified K-Fold CV · Ordinal Encoding · Coupon Engine · Matplotlib / Seaborn · ROC-AUC

Business Problem

On a food delivery marketplace, acquiring a new customer costs 5–7× more than retaining an existing one. Yet most platforms still treat every customer the same: blast a coupon to the entire base when monthly GMV dips, regardless of whether those customers were about to churn or would have ordered anyway. The result is double-digit margin dilution with no measurable retention benefit.

The goal here is to build the intelligence layer that prevents that. Specifically: segment customers by behavioural health (RFM), understand which subscription tier and coupon patterns signal churn risk, build a model that scores every customer’s churn probability without peeking at recency directly, and translate those scores into a prescriptive “Next Best Action” that the CRM team can execute.

Dataset: E-commerce customer behaviour dataset with 350 customers. Columns are remapped to food-delivery domain terminology — Items Purchased becomes Order Frequency, Total Spend becomes Lifetime GMV, Membership Type becomes Subscription Tier, and so on. The same modelling logic applies directly to any iFood, Rappi, or DoorDash customer table.

Pipeline Architecture & Tools

The pipeline runs in four sequential stages, each feeding the next: feature engineering creates the KPIs, descriptive analytics surfaces the business patterns, the model quantifies churn risk, and the coupon engine translates risk scores into actions.

Feature Engineering — AOV, Churn, RFM

Three engineered constructs power the entire pipeline. aov (Average Order Value = Lifetime GMV ÷ Order Frequency) is the canonical marketplace ticket metric. churn applies the business rule “Days Since Last Purchase > 30” to create a binary target. RFM scoring assigns each customer a 1–3 tertile on Recency, Frequency, and Monetary value, then maps the composite score to four operational segments: Champions, Loyal, At Risk, and Churned.

Descriptive Analytics — Tier & Coupon Efficiency

Two business dashboards answer the key exec questions: Do Gold subscribers order more and churn less? (Yes — dramatically.) Does sending coupons reduce churn? (Not in the way you’d hope — coupon users churn at 69.7% vs 1.1% for non-coupon users, revealing a classic discount-dependency trap that a naive coupon engine would make worse.)
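
The coupon-vs-churn split is a one-line aggregation. A minimal sketch on hypothetical rows (the 69.7% / 1.1% figures come from the real 350-customer table, not these toy numbers):

```python
import pandas as pd

# Toy frame standing in for the real customer table (synthetic rows).
df = pd.DataFrame({
    "coupon_used": [True, True, True, False, False, False],
    "churn":       [1,    1,    0,    0,     0,     0],
})

# Churn rate (%) by coupon usage, rounded for reporting
churn_by_coupon = df.groupby("coupon_used")["churn"].mean().mul(100).round(1)
print(churn_by_coupon)
```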

Churn Propensity Model — Random Forest

A scikit-learn Pipeline combines SimpleImputer + OrdinalEncoder for categoricals (preserving tier and satisfaction rank order) with SimpleImputer for numerics. A RandomForestClassifier with max_depth=4 and min_samples_leaf=10 avoids overfitting on the 350-row dataset. Model performance is evaluated via 5-fold StratifiedKFold cross-validation to preserve the churn rate in every fold.

Coupon Engine — Next Best Action

Cross-validated churn probabilities feed a business-rules engine that combines churn risk with coupon sensitivity and subscription tier to prescribe one of four actions: aggressive win-back coupon, subscription upsell (for coupon-averse high-risk customers), light loyalty nudge, or “do not send” (the most important output — saving margin on customers who will order regardless).

The Code

Feature engineering — AOV, Churn target, and RFM segmentation
import pandas as pd

# Remap columns to food-delivery domain language
df = df.rename(columns={
    "Items Purchased":          "order_frequency",
    "Total Spend":              "lifetime_gmv",
    "Membership Type":          "subscription_tier",
    "Discount Applied":         "coupon_used",
    "Days Since Last Purchase": "recency_days",
    "Satisfaction Level":       "satisfaction",
})

# AOV (Average Order Value) — primary marketplace health metric.
# If AOV drops, customers are ordering smaller / cheaper items → margin warning.
df["aov"] = df["lifetime_gmv"] / df["order_frequency"]

# Churn target: customer not seen in > 30 days is considered churned.
# In production this would also integrate explicit cancellations and
# refund patterns — recency is the strongest leading indicator.
df["churn"] = (df["recency_days"] > 30).astype(int)

# RFM segmentation — tertile scoring (1-3).
# Tertiles are appropriate for ~350 rows; production systems use quintiles (1-5).
def rfm_score(series, ascending=True):
    labels = [1, 2, 3] if ascending else [3, 2, 1]
    # Caveat: with a fixed 3-element label list, a collapsed bin edge (heavy
    # ties) would raise a ValueError despite duplicates="drop" — these columns
    # are continuous enough that three distinct tertile edges always exist here.
    return pd.qcut(series, q=3, labels=labels, duplicates="drop").astype(int)

df["R"] = rfm_score(df["recency_days"],    ascending=False)  # lower days = better
df["F"] = rfm_score(df["order_frequency"], ascending=True)
df["M"] = rfm_score(df["lifetime_gmv"],    ascending=True)
df["rfm_score"] = df["R"] + df["F"] + df["M"]

def rfm_segment(row):
    if row["rfm_score"] >= 8:
        return "Champions"
    if row["rfm_score"] >= 6:
        return "Loyal"
    if row["rfm_score"] >= 4:
        return "At Risk"
    return "Churned"

df["rfm_segment"] = df.apply(rfm_segment, axis=1)
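
For reference, the composite score ranges 3–9, so the thresholds above partition it into the four segments. A tiny standalone sketch of the same mapping:

```python
def segment_for(score: int) -> str:
    # Same thresholds as the pipeline's rfm_segment (composite score 3-9)
    if score >= 8:
        return "Champions"
    if score >= 6:
        return "Loyal"
    if score >= 4:
        return "At Risk"
    return "Churned"

mapping = {s: segment_for(s) for s in range(3, 10)}
print(mapping)
# {3: 'Churned', 4: 'At Risk', 5: 'At Risk', 6: 'Loyal',
#  7: 'Loyal', 8: 'Champions', 9: 'Champions'}
```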

Preprocessing — OrdinalEncoder inside a leakage-free Pipeline
from sklearn.preprocessing import OrdinalEncoder
from sklearn.impute import SimpleImputer
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline

cat_features = ["Gender", "City", "subscription_tier", "satisfaction"]
# FEATURES is the model feature list assembled upstream
# (it excludes recency_days — see the leakage note below).
num_features = [c for c in FEATURES if c not in cat_features]

# OrdinalEncoder instead of OHE — subscription_tier and satisfaction have a
# natural rank order (Bronze < Silver < Gold; Unsatisfied < Neutral < Satisfied).
# Passing that ordering explicitly lets the tree-based model exploit rank
# relationships in a single column, rather than requiring separate splits
# across multiple binary indicator columns. Gender and City have no rank;
# their integer codes are arbitrary, which tree models tolerate well.
cat_pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="most_frequent")),  # 2 missing satisfaction rows
    ("encode", OrdinalEncoder(
        categories=[
            ["Female", "Male"],
            sorted(df["City"].unique().tolist()),
            ["Bronze", "Silver", "Gold"],
            ["Unsatisfied", "Neutral", "Satisfied"],
        ],
        handle_unknown="use_encoded_value",
        unknown_value=-1,
    )),
])
num_pipeline  = Pipeline([("impute", SimpleImputer(strategy="median"))])
preprocessor  = ColumnTransformer([
    ("cat", cat_pipeline, cat_features),
    ("num", num_pipeline, num_features),
])
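
To see what the explicit category lists buy, a tiny standalone example (hypothetical mini-frame, not the project's data): codes come out in rank order, and a tier unseen at fit time maps to -1 instead of raising:

```python
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder

enc = OrdinalEncoder(
    categories=[["Bronze", "Silver", "Gold"]],  # explicit rank order
    handle_unknown="use_encoded_value",
    unknown_value=-1,
)
enc.fit(pd.DataFrame({"subscription_tier": ["Bronze", "Silver", "Gold"]}))

# "Platinum" was never seen at fit time -> encoded as -1 (with a warning),
# instead of crashing the scoring job.
codes = enc.transform(
    pd.DataFrame({"subscription_tier": ["Gold", "Bronze", "Platinum"]})
)
print(codes.ravel())  # [ 2.  0. -1.]
```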

Churn model — constrained Random Forest with Stratified K-Fold CV
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_auc_score, classification_report

# Why NOT include recency_days as a feature?
# churn IS (recency_days > 30). Including it gives ~100% accuracy but the model
# has learned nothing useful — it just thresholds the same column used to define
# churn. Excluding recency forces the model to learn from BEHAVIOURAL signals:
# spend patterns, subscription tier, satisfaction, coupon history.
# That is what a real production churn model does.

clf = RandomForestClassifier(
    n_estimators     = 200,
    max_depth        = 4,    # constrained — prevents memorising training split
    min_samples_leaf = 10,   # every leaf must cover ≥ 10 customers
    class_weight     = "balanced",   # handles 65%/35% class imbalance
    random_state     = 42,
)
full_pipeline = Pipeline([("prep", preprocessor), ("model", clf)])

# 5-fold Stratified CV preserves the 35.4% churn ratio in every fold
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
y_proba_cv = cross_val_predict(
    full_pipeline, X, y, cv=cv, method="predict_proba"
)[:, 1]

# Results:
#               precision  recall  f1-score  support
# Active           0.97    1.00      0.98      226
# Churned          1.00    0.94      0.97      124
# accuracy                           0.98      350
# ROC-AUC: 0.9924
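
The feature importances discussed in the Visualizations section are read off the fitted forest inside the pipeline. A self-contained sketch of the extraction pattern on synthetic data (toy columns, not the project's customer table):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(42)
X_toy = pd.DataFrame({
    "lifetime_gmv":    rng.normal(600, 150, 200),
    "order_frequency": rng.integers(1, 30, 200),
})
# Synthetic target driven purely by GMV, so GMV should dominate importances
y_toy = (X_toy["lifetime_gmv"] < 550).astype(int)

pipe = Pipeline([
    ("prep", SimpleImputer(strategy="median")),
    ("model", RandomForestClassifier(n_estimators=50, max_depth=4, random_state=42)),
]).fit(X_toy, y_toy)

# SimpleImputer preserves column order, so importances align with X_toy.columns
importances = pd.Series(
    pipe.named_steps["model"].feature_importances_,
    index=X_toy.columns,
).sort_values(ascending=False)
print(importances)
```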

Coupon engine — Next Best Action from churn probabilities
# Reuse the out-of-fold probabilities already computed above — re-running
# cross_val_predict here would duplicate the entire CV fit.
df["churn_proba"] = y_proba_cv

def next_best_action(row):
    """
    Prescribe the optimal retention action for each customer.

    Decision logic (business rules informed by the model):
    ──────────────────────────────────────────────────────
    HIGH RISK  (p ≥ 0.70) + coupon sensitive  → Aggressive coupon (win-back)
    HIGH RISK  (p ≥ 0.70) + not coupon user   → Subscription upsell
    MEDIUM RISK (0.40 ≤ p < 0.70) + Bronze    → Tier upsell (structural fix)
    MEDIUM RISK (0.40 ≤ p < 0.70) + Gold/Silv → Light coupon (loyalty nudge)
    LOW RISK   (p < 0.40)                       → DO NOT SEND  ← save margin

    The DO_NOT_SEND branch is as important as any win-back action.
    Sending a coupon to a customer with p = 0.02 is pure margin dilution —
    they were going to order regardless. The engine protects 234 customers
    (66.9% of the base) from unnecessary discount spend.
    """
    p, tier, used = row["churn_proba"], row["subscription_tier"], row["coupon_used"]
    if p >= 0.70:
        return "SEND_AGGRESSIVE_COUPON" if used else "UPSELL_SUBSCRIPTION"
    if p >= 0.40:
        return "UPSELL_TIER" if tier == "Bronze" else "SEND_LIGHT_COUPON"
    return "DO_NOT_SEND"

df["nba_action"] = df.apply(next_best_action, axis=1)

# Results:
#   DO_NOT_SEND               234 customers   avg churn prob: 0.059
#   SEND_AGGRESSIVE_COUPON    116 customers   avg churn prob: 0.993
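
The results table above is a grouped aggregation over the scored frame. A standalone sketch with hypothetical probabilities (the 234 / 116 split comes from the real run):

```python
import pandas as pd

# Toy scored frame (synthetic probabilities, not the project's output)
nba = pd.DataFrame({
    "nba_action":  ["DO_NOT_SEND", "DO_NOT_SEND", "SEND_AGGRESSIVE_COUPON"],
    "churn_proba": [0.02, 0.10, 0.99],
})

# Customer count and mean churn probability per prescribed action
summary = nba.groupby("nba_action")["churn_proba"].agg(
    customers="size", avg_churn_proba="mean"
)
print(summary)
```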

Visualizations

Three presentation-ready multi-panel charts walk through the business findings — from subscription tier performance and coupon efficiency through churn model diagnostics. All figures share the portfolio’s dark visual language and are annotated for executive readability.

Subscription Tier Performance Dashboard. Gold subscribers generate an AOV of R$ 74.8 vs. R$ 56.2 for Bronze — a 33% ticket premium. More strikingly, their churn rate is 5.1% vs. 51.7% for Bronze. This is the business case for investing in the subscription programme: Gold members don’t just spend more per order — they stay. Silver sits at an unexpected 49.6% churn, suggesting the mid-tier offers insufficient perceived value to lock in behaviour.
Coupon Efficiency Analysis. The most counterintuitive finding: customers who used coupons churn at 69.7%, while non-coupon customers churn at just 1.1%. This is not evidence that coupons cause churn — it is a classic causality trap. Churning customers are likely receiving coupons precisely because they are at risk. The coupon is a reactive patch, not a cure. The satisfaction panel confirms the same divide: coupon users skew heavily Unsatisfied, while non-coupon users are overwhelmingly Satisfied. The AOV panel shows coupon users also generate a lower AOV (R$ 63.3 vs. R$ 67.2), eroding margin from both ends.
Churn Propensity Model — Random Forest. satisfaction is the single strongest churn driver (0.339 importance), followed by coupon_used (0.225) and lifetime_gmv (0.144). Subscription tier, despite being the starkest descriptive KPI, ranks lower because it is partially collinear with spend and satisfaction inside the tree’s information gain calculation. The ROC curve achieves AUC 0.992 on 5-fold out-of-fold predictions — reflecting the dataset’s clean synthetic structure where customer archetypes are well-separated.

Conclusion

The most important lesson from the coupon efficiency analysis is that correlation is not causation — and acting on correlation is expensive. Coupon users churning at 69.7% does not mean coupons drive churn. It means the platform’s current coupon strategy is a reactive fire-fighting tool: discounts are already being sent to at-risk customers, not to healthy ones. A prescriptive engine that scores churn probability before the coupon decision is made inverts this logic — proactive instead of reactive.

The second lesson is about feature leakage and the recency exclusion. If recency_days were included as a model feature, the classifier would achieve near-perfect accuracy trivially, because churn is defined directly from that column. Excluding it forces the model to learn from behavioural signals — satisfaction, spend trajectory, coupon sensitivity — which are the signals available in real time, before the 30-day window closes. That exclusion is the difference between a demo and a deployable model.
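
The leakage point is easy to demonstrate: train a depth-1 tree on recency alone against a target defined by that same column, and it rediscovers the 30-day rule perfectly while learning nothing behavioural (synthetic data):

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
recency = rng.integers(0, 60, size=300).reshape(-1, 1)
churn = (recency.ravel() > 30).astype(int)  # target IS a threshold on recency

# A single-split stump is enough: it finds the cut between 30 and 31
leaky = DecisionTreeClassifier(max_depth=1).fit(recency, churn)
auc = roc_auc_score(churn, leaky.predict_proba(recency)[:, 1])
print(auc)  # 1.0 — perfect score, zero insight
```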

Finally, the DO_NOT_SEND branch of the coupon engine is its most valuable output. Protecting 234 out of 350 customers from unnecessary discount spend — 66.9% of the base — is where margin is recovered. The win-back campaigns get the headline; the saved margin funds them.