# Copy a page

A utility script that clones an existing page (HTML, metadata, files)
to a new slug. Useful for:

- "Make me another listing like this one but for a different property"
- Duplicating yesterday's daily menu as a starting point for tomorrow
- Creating template pages and then making variants
- Forking an event page for the next year's edition

## How it works

1. Read the source page's metadata via `GET /projects/<slug>/pages/<source-slug>`
2. Read the public HTML via `GET https://<slug>.uat-beam.page/<source-slug>`
3. Read each file's contents via `GET https://<slug>.uat-beam.page/<source-slug>/<filename>`
4. Create the new page via `POST /projects/<slug>/pages`
5. Set its metadata via `PUT /projects/<slug>/pages/<new-slug>/metadata`
6. Upload each file via `PUT /projects/<slug>/pages/<new-slug>/files/<filename>`

Reading from the public URL is the simplest way to get file contents.
You can also use `GET /projects/<slug>/pages/<page-slug>/files/<filename>`
via the API, which returns `{content, contentType}` for text files or
`{base64, contentType}` for binary files.
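
For example, here's a minimal sketch of reading one file through the
API and handling both response shapes. The project slug, page slug, and
filename are placeholders:

```python
import base64

import requests

API = "https://api.uat-beam.page"
H = {"Authorization": "Bearer <your-token>"}

resp = requests.get(
    f"{API}/projects/my-project/pages/my-page/files/photo.jpg",
    headers=H,
).json()

if "content" in resp:
    data = resp["content"].encode()          # text file
else:
    data = base64.b64decode(resp["base64"])  # binary file
print(resp["contentType"], len(data), "bytes")
```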

## The script

```python
import requests
import time

API = "https://api.uat-beam.page"
TOKEN = "<your-token>"
H = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}


def copy_page(project_slug, source_page_slug, new_page_slug, new_notes=None):
    """
    Clone source_page_slug -> new_page_slug within the same project.
    Copies metadata, all files, and notes (override with new_notes).
    """
    base_url = f"https://{project_slug}.uat-beam.page"
    api_root = f"{API}/projects/{project_slug}"

    # 1. Read the source page details (fail fast if the page doesn't exist)
    src_resp = requests.get(f"{api_root}/pages/{source_page_slug}", headers=H)
    src_resp.raise_for_status()
    src = src_resp.json()
    metadata = src["metadata"]
    files = [f["filename"] for f in src["files"]]
    notes = new_notes if new_notes is not None else src.get("notes", "")

    # 2. Create the new page (this also creates a default index.html
    #    we'll immediately overwrite below)
    requests.post(f"{api_root}/pages", headers=H, json={
        "slug": new_page_slug,
        "notes": notes,
    })
    time.sleep(1)

    # 3. Set the metadata
    requests.put(
        f"{api_root}/pages/{new_page_slug}/metadata",
        headers=H,
        json=metadata,
    )

    # 4. Copy each file. Read from the public URL, upload to the API.
    for filename in files:
        # Build the source URL
        if source_page_slug == "/":
            file_url = f"{base_url}/{filename}"
        else:
            file_url = f"{base_url}/{source_page_slug}/{filename}"

        # index.html isn't served at an explicit /index.html path;
        # it's served at the page's folder URL itself, so fetch that
        if filename == "index.html":
            if source_page_slug == "/":
                file_url = base_url + "/"
            else:
                file_url = f"{base_url}/{source_page_slug}"

        resp = requests.get(file_url)
        if not resp.ok:
            print(f"  Skipped {filename} (HTTP {resp.status_code})")
            continue

        # Detect text vs binary by content type
        content_type = resp.headers.get("content-type", "")
        is_text = (
            content_type.startswith("text/")
            or "javascript" in content_type
            or "json" in content_type
            or "xml" in content_type
            or "svg" in content_type
        )

        if is_text:
            body = {"content": resp.text}
        else:
            # Use URL upload: the file already lives at a public URL,
            # so let the API fetch it rather than re-encoding the
            # downloaded bytes as base64
            body = {"url": file_url}

        requests.put(
            f"{api_root}/pages/{new_page_slug}/files/{filename}",
            headers=H,
            json=body,
        )
        print(f"  Copied {filename}")
        time.sleep(0.2)  # gentle pacing between uploads

    print(f"Done. New page at {base_url}/{new_page_slug}")


# Example: clone a property listing as the starting point for a new one
copy_page(
    project_slug="smith-estate-agency",
    source_page_slug="123-oak-street",
    new_page_slug="234-elm-street",
    new_notes="3 bed semi (template from Oak Street)",
)
```
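
If you want to sanity-check the clone, the same details endpoint from
step 1 works on the new page (slugs as in the example call above):

```python
check = requests.get(
    f"{API}/projects/smith-estate-agency/pages/234-elm-street",
    headers=H,
).json()

print([f["filename"] for f in check["files"]])  # should match the source page
print(check["metadata"])                        # still the Oak Street values for now
```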

After cloning, you'll usually want to update the metadata on the new
page to reflect the actual differences:

```python
requests.put(
    f"{API}/projects/smith-estate-agency/pages/234-elm-street/metadata",
    headers=H,
    json={
        "address": "234 Elm Street, Springfield",
        "price": 265000,
        "bedrooms": 3,
        "bathrooms": 2,
        "sqm": 115,
        "features": ["Bay windows", "Private driveway"],
        "status": "active",
    },
)
```

## Variations

- **Copy across projects:** add a `target_project_slug` parameter and
  use it for the create + upload calls. The source URL stays the same.
- **Bulk clone:** call `copy_page` in a loop with a list of new slugs
  and metadata overrides. Handy for spinning up a year of monthly
  newsletter pages from a single template (first sketch below).
- **Sync HTML, fresh metadata:** if you've improved the HTML on the
  source page and want every other page to inherit the new design,
  loop over your pages, clone the template, and re-apply each page's
  existing metadata to its copy. Or skip cloning entirely and upload
  the same `index.html` to every page directly (second sketch below).
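
Here's a sketch of the bulk-clone idea; the project, template slug, and
notes are invented for illustration:

```python
# Spin up a year of monthly newsletter pages from one template page.
for month in ["january", "february", "march", "april", "may", "june",
              "july", "august", "september", "october", "november", "december"]:
    copy_page(
        project_slug="village-bakery",           # hypothetical project
        source_page_slug="newsletter-template",  # hypothetical template page
        new_page_slug=f"newsletter-2025-{month}",
        new_notes=f"{month.title()} 2025 newsletter (from template)",
    )
```

And the direct-upload version of the HTML sync, reusing only calls the
script already makes (slugs are again placeholders):

```python
# Push the template's HTML to every page, leaving metadata alone.
template_html = requests.get(
    "https://smith-estate-agency.uat-beam.page/template-listing"
).text

for slug in ["123-oak-street", "234-elm-street"]:  # however you list your pages
    requests.put(
        f"{API}/projects/smith-estate-agency/pages/{slug}/files/index.html",
        headers=H,
        json={"content": template_html},
    )
```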

## Why this isn't an API endpoint

There's no native "duplicate page" call — every operation that this
script does is already in the API as a primitive (create page, set
metadata, upload file). The script is just a thin convenience wrapper.
If you need duplication often, paste this script into your code
execution environment and call `copy_page()` whenever you need to.
