Building a Labubu bot: a step-by-step tutorial

Failing to get your hands on a Labubu box? That's no surprise: limited-edition figures are hunted by fans, collectors, and resellers alike, and buying online is often the only option. Under those conditions, purchasing a figure manually is nearly impossible, and a dedicated bot is the answer. This guide covers the pitfalls of buying Labubu with a bot and provides a ready-made solution.

Why is using a Labubu bot non-negotiable?

Labubu, a plush monster, became popular in 2019 after its creator, Kasing Lung, began collaborating with Pop Mart. The Chinese retailer is the exclusive manufacturer and sells Labubu primarily in blind boxes. The boxes are grouped into themed lines, including limited-edition collabs with other brands, and a line often features a rare, unannounced “secret” figure. Add the fact that Pop Mart drops don’t happen at a fixed time – fans have to track patterns and stay constantly on the lookout. The result is a frenzy in which buying a toy becomes a fight. That’s not just a figure of speech – Pop Mart stopped selling Labubu boxes in some of its offline stores after customer incidents led to physical injuries. Online purchases are far safer, but they still demand speed and accuracy, and even a minor slip like clicking the wrong link can leave you with no toy. 

A dedicated bot, on the other hand, saves you a lot of trouble:

  • Monitors new arrivals, eliminating the need to manually check whether the toy is in stock; 
  • Automates the buying process – you only have to complete the purchase;
  • Completes actions in no time – and speed is the most crucial factor when buying Labubu;
  • Eliminates human errors like accidentally pressing the wrong button. 

Is it fair to use a bot?

Botting isn’t prohibited as such; however, you should stick to Pop Mart’s Terms of Service and avoid illegal practices. Also, don’t use bots to harvest sensitive data or harm the website. DataImpulse encourages ethical scraping only and does not bear responsibility for any illegal activities. 

So, let’s get you ready for the next drop.

Building a Labubu bot using Python 

Before rushing into coding, you need to pay attention to key factors that determine the success of your bot: 

  • Designing the logic;
  • Handling CAPTCHAs, pop-ups, and location-based restrictions;
  • Mimicking real human behaviour – setting randomized intervals between requests, etc. (see the sketch below);
  • Choosing additional tools;
  • Reviewing and updating the bot – Pop Mart regularly rolls out updates, and your bot should be able to handle them. 

We will solve those problems step by step. 
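
To make the “human behaviour” point concrete, here is a minimal sketch of two common tricks: randomized pauses between requests and dismissing a pop-up if one appears. The text=Accept selector is a hypothetical example – inspect the actual pop-up on the page you target:

import asyncio
import random

async def human_pause(min_s: float = 2.0, max_s: float = 6.0) -> None:
    # Sleep a random amount so requests don't arrive at machine-regular intervals
    await asyncio.sleep(random.uniform(min_s, max_s))

async def dismiss_popup(page) -> None:
    # Hypothetical selector – adjust it to whatever pop-up the target page shows
    try:
        button = page.locator("text=Accept")
        if await button.count():
            await button.first.click()
    except Exception:
        pass  # no pop-up found; carry on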

Getting started

  1. Visit the official Python website and download the latest version (or update your existing one). During installation, make sure to add Python to your PATH so you can run it from the terminal. 
  2. Run the following command to install the necessary libraries:

pip install playwright httpx python-dotenv


This command installs three Python packages:

  • Playwright – a browser automation tool;
  • httpx – an HTTP client for Python; supports HTTP/1.1 & HTTP/2, async requests, proxies, etc.;
  • python-dotenv – loads environment variables from a .env file; suitable for keeping sensitive details (like passwords and tokens) outside your code. 
  3. Use this command to download the browser binaries:

playwright install
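
To confirm the installation went through, you can run a quick one-liner (the three packages import as playwright, httpx, and dotenv):

python -c "import playwright, httpx, dotenv; print('All set')"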


Create a Telegram bot and a channel

We need a Telegram bot to receive notifications when a new release is available. This way, you can go about your life and still not miss a drop. 

To create a bot and get an API token, open Telegram, message @BotFather, and follow the /newbot prompts. 

Then, you need to create a public channel and add your bot as an admin. 
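
Once the bot is an admin, it’s worth sending a test message to confirm the token and channel name work together. Below is a minimal sketch that calls the standard Telegram Bot API sendMessage method – the same one the bot uses later; replace the token and @channel_name placeholders with your own values:

import httpx

TOKEN = "your_token"        # from @BotFather
CHAT_ID = "@channel_name"   # your public channel

# A 200 response with "ok": true means the bot can post to the channel
r = httpx.post(
    f"https://api.telegram.org/bot{TOKEN}/sendMessage",
    data={"chat_id": CHAT_ID, "text": "Labubu bot test"},
)
print(r.status_code, r.text)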

Developing a bot 

Generally, the bot’s structure looks like this:

labububot/

├─ main.py
├─ runner.py
├─ config.py
├─ scraping.py
├─ telegram_utils.py
├─ state.py
├─ labubu_state.json
├─ .env 

Let’s go through what each part is for.

runner.py orchestrates the bot’s logic. It runs in a continuous loop at a defined interval. This piece of code loads the current state, collects product information, checks availability, sends Telegram notifications, and saves the updates in the labubu_state.json file, which is created automatically. 


import asyncio
from playwright.async_api import async_playwright

from state import load_state, save_state
from scraping import gather_products_once
from telegram_utils import render_message, send_telegram, now_iso
from config import INTERVAL_SECONDS

async def run():
    # Restore what we knew about each product from the previous run
    state = load_state()
    async with async_playwright() as pw:
        print("[START] Labubu stock watcher started.")
        while True:
            try:
                items = await gather_products_once(pw)
                for it in items:
                    url = it["url"]
                    prev = state.get(url, {}).get("in_stock")
                    state[url] = {"in_stock": it["in_stock"], "name": it["name"], "price": it["price"]}
                    # Notify only on the transition to in-stock, so the channel isn't spammed every cycle
                    if it["in_stock"] and prev is not True:
                        msg = render_message(it)
                        await send_telegram(msg)
                save_state(state)
                print(f"[OK] Cycle finished: {len(items)} item(s). {now_iso()}")
            except Exception as e:
                print(f"[FATAL] {e}")
            # Wait before the next scan; the interval comes from .env
            await asyncio.sleep(INTERVAL_SECONDS)


config.py centralizes all constants and settings. It defines which Pop Mart pages to scan, sets where to store data, and loads sensitive information, such as proxy details and the Telegram bot API key, from the .env file.


import os
from pathlib import Path
from dotenv import load_dotenv

SEARCH_PAGES = [
    "https://www.popmart.com/us/search/LABUBU",
    "https://www.popmart.com/us/new-arrivals",
    "https://www.popmart.com/us/collection/11/the-monsters",
]

load_dotenv()

INTERVAL_SECONDS = int(os.getenv("INTERVAL_SECONDS", "120"))

TELEGRAM_BOT_TOKEN = os.getenv("TELEGRAM_BOT_TOKEN")
TELEGRAM_CHAT_ID = os.getenv("TELEGRAM_CHAT_ID")

PROXY_SERVER = os.getenv("PROXY_SERVER")
PROXY_USERNAME = os.getenv("PROXY_USERNAME")
PROXY_PASSWORD = os.getenv("PROXY_PASSWORD")

HEADLESS = os.getenv("HEADLESS", "1") != "0"

STATE_FILE = Path("labubu_state.json")


.env stores sensitive data. Keeping such details in a separate file instead of hardcoding them improves security and makes it easier to change passwords or tokens when necessary. The file also sets which mode (headless or not) the browser runs in and configures the scraping interval. Remember to replace the placeholders with your actual credentials and data. You can copy your DataImpulse credentials from the chosen plan’s tab. 


TELEGRAM_BOT_TOKEN=your_token
TELEGRAM_CHAT_ID=@name

INTERVAL_SECONDS=90

PROXY_SERVER=proxy_server
PROXY_USERNAME=proxy_login
PROXY_PASSWORD=proxy_password

HEADLESS=1


scraping.py is the heart of the scraper. It uses Playwright to monitor product availability on the defined Pop Mart pages, extracting the target information and parsing the structured JSON-LD data embedded in product pages.  


import re
import json
from urllib.parse import urljoin

from config import SEARCH_PAGES, PROXY_SERVER, PROXY_USERNAME, PROXY_PASSWORD, HEADLESS

async def extract_products_from_search(page) -> list[dict]:
    products, seen = [], set()
    anchors = await page.locator('a[href*="/products/"]').all()
    for a in anchors:
        href = await a.get_attribute("href")
        if not href or "/products/" not in href:
            continue
        url = urljoin(page.url, href)
        if url in seen:
            continue
        name = (await a.text_content() or "").strip()
        if not name or len(name) < 3:
            try:
                name = (await a.locator("..").text_content() or "").strip()
            except Exception:
                pass
        if "labubu" in (name or "").lower() or "labubu" in url.lower() or "the-monsters" in url.lower():
            products.append({"name": name, "url": url})
            seen.add(url)
    return products

async def parse_ld_json(page):
    # Pop Mart embeds product metadata as JSON-LD; the "offers" block holds availability and price
    try:
        content = await page.locator('script[type="application/ld+json"]').nth(0).text_content()
        data = json.loads(content)
        if isinstance(data, list):
            for item in data:
                if isinstance(item, dict) and "offers" in item:
                    offers = item["offers"]
                    if isinstance(offers, dict):
                        return {
                            "availability": offers.get("availability"),
                            "price": offers.get("price"),
                            "priceCurrency": offers.get("priceCurrency"),
                        }
        elif isinstance(data, dict) and "offers" in data:
            offers = data["offers"]
            return {
                "availability": offers.get("availability"),
                "price": offers.get("price"),
                "priceCurrency": offers.get("priceCurrency"),
            }
    except Exception:
        pass
    return {}

async def check_product_stock(page, url: str) -> dict:
    await page.goto(url, wait_until="domcontentloaded")
    await page.wait_for_timeout(1200)
    title = (await page.title()) or ""
    title = title.replace(" - POP MART", "").strip()
    meta = await parse_ld_json(page)
    availability = (meta.get("availability") or "") if meta else ""
    price = meta.get("price") if meta else None
    if not price:
        try:
            price_text = await page.locator('[class*="price"], .price, [data-test*="price"]').first.text_content()
            if price_text:
                price_clean = re.sub(r"[^\d\.,]", "", price_text)
                price = price_clean if price_clean else None
        except Exception:
            pass
    in_stock = None
    try:
        if await page.locator("text=ADD TO BAG").count():
            in_stock = True
        elif await page.locator("text=SOLD OUT").count():
            in_stock = False
    except Exception:
        pass
    if in_stock is None and availability:
        in_stock = "InStock" in availability or "instock" in availability.lower()
    if in_stock is None:
        try:
            btn_texts = [t.lower() for t in await page.locator("button, div[role=button]").all_text_contents()]
            joined = " | ".join(btn_texts)
            if "add to bag" in joined or "add to cart" in joined:
                in_stock = True
            elif "sold out" in joined or "out of stock" in joined:
                in_stock = False
        except Exception:
            pass
    return {
        "url": url,
        "name": title or "LABUBU",
        "price": price,
        "in_stock": bool(in_stock),
    }

async def gather_products_once(pw) -> list[dict]:
    browser = await pw.chromium.launch(headless=HEADLESS)
    context_args = {}
    if PROXY_SERVER:
        # Route browser traffic through the configured proxy (credentials from .env)
        context_args["proxy"] = {"server": PROXY_SERVER, "username": PROXY_USERNAME, "password": PROXY_PASSWORD}
    context = await browser.new_context(**context_args)
    page = await context.new_page()

    pool = {}
    for url in SEARCH_PAGES:
        try:
            await page.goto(url, wait_until="domcontentloaded")
            await page.wait_for_timeout(1000)
            await page.evaluate("window.scrollTo(0, document.body.scrollHeight)")
            await page.wait_for_timeout(800)
            prods = await extract_products_from_search(page)
            for p in prods:
                pool[p["url"]] = p
        except Exception as e:
            print(f"[WARN] Failed to open {url}: {e}")

    results = []
    for url in list(pool.keys()):
        try:
            res = await check_product_stock(page, url)
            results.append(res)
        except Exception as e:
            print(f"[WARN] Failed to check {url}: {e}")

    await context.close()
    await browser.close()
    return results
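
Tip: you can test scraping.py on its own before wiring everything together. A quick sketch that runs one gathering cycle and prints what it found:

import asyncio
from playwright.async_api import async_playwright
from scraping import gather_products_once

async def main():
    # One full scan: collect product links from the search pages, then check each product page
    async with async_playwright() as pw:
        for item in await gather_products_once(pw):
            print(item["in_stock"], item["name"], item["url"])

asyncio.run(main())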


telegram_utils.py is responsible for sending Telegram notifications when the target product is available. 


import html
import httpx
from datetime import datetime, timezone
from config import TELEGRAM_BOT_TOKEN, TELEGRAM_CHAT_ID

def now_iso() -> str:
    return datetime.now(timezone.utc).astimezone().isoformat(timespec="seconds")

async def send_telegram(text: str, disable_preview: bool = False) -> None:
    if not TELEGRAM_BOT_TOKEN or not TELEGRAM_CHAT_ID:
        print("[WARN] TELEGRAM_BOT_TOKEN / TELEGRAM_CHAT_ID not set — message not sent.")
        return
    
    api_url = f"https://api.telegram.org/bot{TELEGRAM_BOT_TOKEN}/sendMessage"
    payload = {
        "chat_id": TELEGRAM_CHAT_ID,
        "text": text,
        "parse_mode": "HTML",
        "disable_web_page_preview": disable_preview,
    }
    async with httpx.AsyncClient(timeout=20) as client:
        r = await client.post(api_url, data=payload)
        if r.status_code != 200:
            print(f"[TG ERROR] {r.status_code}: {r.text}")

def render_message(item: dict) -> str:
    title = html.escape(item.get("name") or "Labubu item")
    price = item.get("price")
    price_line = f"\n💵 Price: {price}" if price else ""
    return (
        f"🟢 LABUBU in stock\n"
        f"{title}{price_line}\n"
        f"🔗 {item['url']}\n"
        f"🕒 {now_iso()}\n"
        f"#Labubu #PopMart"
    )


state.py loads and saves product state in the labubu_state.json file, so the bot remembers between cycles which items were already in stock. 


import json
from config import STATE_FILE

def load_state() -> dict:
    if STATE_FILE.exists():
        try:
            return json.loads(STATE_FILE.read_text("utf-8"))
        except Exception:
            return {}
    return {}

def save_state(state: dict) -> None:
    STATE_FILE.write_text(
        json.dumps(state, ensure_ascii=False, indent=2),
        encoding="utf-8"
    )


main.py serves as the entry point for the whole application. Just as a “Start engine” button starts a car, this code starts your bot. 


import asyncio
from runner import run

if __name__ == "__main__":
    try:
        asyncio.run(run())
    except KeyboardInterrupt:
        print("\n[STOP] Stopped by user.")


To run the file (and launch the bot), type this command in a terminal:


python main.py
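
If everything is configured correctly, the terminal should print output along these lines (the item count and timestamp will, of course, differ):

[START] Labubu stock watcher started.
[OK] Cycle finished: 12 item(s). 2025-07-01T10:00:00+03:00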


Additional tools for seamless work 

There are several instruments you can implement:

  • Logging frameworks – track errors and monitor general bot activity; helpful for debugging (see the sketch after this list). 
  • Error-tracking tools – automatically capture and report errors in real time, letting you address issues quickly.
  • Monitoring and alerting services – keep an eye on bot performance and alert you to unusual activity or downtime. 
  • Environment isolation – containers help create isolated environments, which simplifies deployment. 
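
For the first point, a minimal sketch using Python’s built-in logging module: the file name, logger name, and format below are arbitrary choices, and the print() calls in runner.py could be swapped for logger calls like these:

import logging

# Timestamps plus levels make cycles easy to trace in a log file
logging.basicConfig(
    filename="labubu_bot.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
logger = logging.getLogger("labubu_bot")

logger.info("Labubu stock watcher started.")
logger.warning("Failed to open %s", "https://www.popmart.com/us/new-arrivals")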

In our code, we also used dotenv to keep sensitive data out of the source – especially important today, with attacks on the rise – and proxies. Proxies are essential: they help you mimic real human behaviour by distributing your traffic across different IP addresses, let you run numerous connection threads simultaneously, and increase your chances of getting a Labubu. A quick way to check that your proxy works is shown below. When it comes to proxies, DataImpulse is ready to have your back – email us at [email protected] or use the “Try now” button to start with us.
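
Here is a minimal proxy-check sketch under a few assumptions: the gateway address and credentials are placeholders (copy the exact values from your DataImpulse dashboard), api.ipify.org is just one example of an IP-echo service, and the proxy keyword argument requires a recent httpx version (older releases used proxies):

import asyncio
import httpx

# Placeholder values – substitute your real proxy gateway and credentials
PROXY_URL = "http://proxy_login:[email protected]:8000"

async def check_proxy():
    # The service echoes back the IP it sees, confirming traffic goes through the proxy
    async with httpx.AsyncClient(proxy=PROXY_URL, timeout=20) as client:
        r = await client.get("https://api.ipify.org")
        print(f"Exit IP through the proxy: {r.text}")

asyncio.run(check_proxy())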

Jennifer R.

Content Editor

Content Manager at DataImpulse. Jennifer's degree in philology and translation and several years of experience in content writing help her create easy-to-understand copy, even on tangled tech topics. In every text, her goal is to provide an in-depth look at the topic and answer all possible questions. Subscribe to our newsletter to always stay updated on the best technologies for your business.