Exporting projects, notebooks, and environments#

Anaconda is deprecating the Projects, Notebooks, and Environments features of Anaconda.org on August 20, 2024 to streamline its feature set. All projects, notebooks, and environments uploaded to Anaconda.org will become inaccessible after that date. Anaconda therefore recommends exporting your projects, notebooks, and environment files before the deprecation date, either manually or with the scripts provided below.

Exporting Projects#

If you don’t have many projects to export or do not want to use the script export option, you can manually download your project files using the anaconda.org user interface.

  1. Open your Projects page at anaconda.org/<USERNAME>/projects, where <USERNAME> is your username.

  2. Click Download project archive for each project you want to export.

You will find a .tar.bz2 file for each downloaded project in your Downloads folder.

Caution

Use this script at your own risk. It has only been tested on macOS and may have unintended consequences on other operating systems.

If you have many projects to download, use the following script to download them all automatically.

  1. Open a terminal (Anaconda Prompt on Windows).

  2. Create an environment containing BeautifulSoup and requests using the following command:

    conda create -n export-projects beautifulsoup4 requests -y
    
  3. Then activate the environment using the following command:

    conda activate export-projects
    
  4. Create a Python file called export-projects.py.

  5. Paste the following script into the file:

    try:
        from bs4 import BeautifulSoup as bs
        import requests
    except ImportError:
        print("Did not find BeautifulSoup or requests.")
        print(
            "To install dependencies, please run:\n conda create -n export-projects beautifulsoup4 requests -y\n"
        )
        print(
            "Then run:\n  conda activate export-projects\nand:\n  python export-projects.py"
        )
        exit()
    from pathlib import Path
    import argparse
    
    parser = argparse.ArgumentParser()
    parser.add_argument("--username", help="Username for anaconda.org")
    args = parser.parse_args()
    
    username = args.username
    
    domain = "https://anaconda.org"
    if not username:
        username = input("Input anaconda.org username: ")
    url = f"{domain}/{username}/projects"
    
    response = requests.get(url)
    
    projects = []
    if response.status_code == 200:
        soup = bs(response.content, "html.parser")
        section_class = "small-block-grid-1 medium-block-grid-2"
        ul = soup.find("ul", class_=section_class)
        if ul:
            items = ul.find_all("li")
            for item in items:
                proj_href = item.find("a").get("href")
                projects.append(proj_href)
    else:
        print(f"Error: could not reach {url} (status code {response.status_code})")
        exit()
    
    print("\nCreating anaconda-project-downloads folder\n")
    directory_path = Path("./anaconda-project-downloads")
    try:
        directory_path.mkdir(parents=True, exist_ok=True)
    except Exception as e:
        print(f"An error has occurred: {e}")
    
    for project in projects:
        # create URL by cat'ing domain + project + /download
        url = domain + project + "/download"
        print(f"Downloading {url}")
        # request and download it
        response = requests.get(url)
    
        project_name = project.split("/")[3]
        # write to disk
        with open(f"./anaconda-project-downloads/{project_name}.tar.bz2", "wb") as f:
            f.write(response.content)
            print(f"Saved anaconda-project-downloads/{project_name}.tar.bz2")
    
    print(
        "\nYou can run `tar -xvjf anaconda-project-downloads/PROJECT_NAME.tar.bz2` to extract the project"
    )
    
  6. Run the script with the following command:

    # Replace <PATH-TO-FILE> with the file path to your Python file
    # Replace <USERNAME> with your Anaconda.org username
    python <PATH-TO-FILE>/export-projects.py --username <USERNAME>
    
  7. The script downloads all projects under the provided username into an anaconda-project-downloads folder in your current working directory.

Extracting exported project files#

Your exported project files are downloaded as .tar.bz2 files. To extract your files from the archive, open a terminal or command line application and run the following command:

# Replace <.TAR.BZ2-FILE> with the file path and name of your project file
tar -xvjf <.TAR.BZ2-FILE>.tar.bz2
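If you exported many projects with the script above, extracting each archive by hand gets tedious. The following is a minimal Python sketch, assuming the default anaconda-project-downloads folder, that unpacks every .tar.bz2 archive into a subfolder of the same name:

```python
import tarfile
from pathlib import Path


def extract_archives(folder):
    """Extract every .tar.bz2 archive in `folder` into a subfolder of the same name."""
    extracted = []
    folder_path = Path(folder)
    if not folder_path.is_dir():
        print(f"{folder} does not exist")
        return extracted
    for archive in sorted(folder_path.glob("*.tar.bz2")):
        # "demo.tar.bz2" -> strip ".bz2", then ".tar", leaving "demo"
        target = archive.with_suffix("").with_suffix("")
        target.mkdir(parents=True, exist_ok=True)
        with tarfile.open(archive, "r:bz2") as tar:
            tar.extractall(path=target)
        extracted.append(target.name)
    return extracted


if __name__ == "__main__":
    for name in extract_archives("anaconda-project-downloads"):
        print(f"Extracted {name}")
```

Each project ends up in its own subfolder, so files from different projects cannot overwrite one another.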

Exporting Notebooks#

If you do not have many notebooks to export or do not want to use the script export option, you can manually download your notebook files using the anaconda.org user interface.

  1. Open your Notebooks page at anaconda.org/<USERNAME>/notebooks, where <USERNAME> is your username.

  2. Select the notebook you want to export.

  3. Click Download.

For information on downloading your notebooks via the command line, see Downloading your notebook.

Caution

Use this script at your own risk. It has only been tested on macOS and may have unintended consequences on other operating systems.

If you have many notebooks to download, use the following script to download them all automatically.

  1. Open a terminal (Anaconda Prompt on Windows).

  2. Create an environment containing BeautifulSoup and requests using the following command:

    conda create -n export-notebooks beautifulsoup4 requests -y
    
  3. Then activate the environment using the following command:

    conda activate export-notebooks
    
  4. Create a Python file called export-notebooks.py.

  5. Paste the following script into the file:

    try:
        from bs4 import BeautifulSoup as bs
        import requests
    except ImportError:
        print("Did not find BeautifulSoup or requests.")
        print(
            "To install dependencies, please run:\n conda create -n export-notebooks beautifulsoup4 requests -y\n"
        )
        print(
            "Then run:\n  conda activate export-notebooks\nand:\n  python export-notebooks.py"
        )
        exit()
    from pathlib import Path
    import argparse
    
    parser = argparse.ArgumentParser()
    parser.add_argument("--username", help="Username for anaconda.org")
    args = parser.parse_args()
    
    username = args.username
    
    domain = "https://anaconda.org"
    if not username:
        username = input("Input anaconda.org username: ")
    page1 = f"/{username}/notebooks"
    url = f"{domain}{page1}"
    
    response = requests.get(url)
    
    
    def next_arrow_ignore_unavailable(tag):
        classes = []
        if tag.name == "li":
            classes = tag.get("class", [])
        if "Next" in tag.text:
            return "arrow" in classes and "unavailable" not in classes
        else:
            return False
    
    
    def process_page(soup, notebooks=None):
        # avoid a shared mutable default argument
        if notebooks is None:
            notebooks = []
        section_class = "small-block-grid-1 medium-block-grid-2"
        ul = soup.find("ul", class_=section_class)
        if ul:
            items = ul.find_all("li")
            for item in items:
                proj_href = item.find("a").get("href")
                notebooks.append(proj_href)
        return notebooks
    
    
    
    pages = []
    notebooks = []
    if response.status_code == 200:
        soup = bs(response.content, "html.parser")
        more_pages = True
        while more_pages:
            # process current page
            notebooks = process_page(soup, notebooks)
    
            # check for more pages
            li_tags = soup.find_all(next_arrow_ignore_unavailable)
            if li_tags:
                a_tags = [li.find("a") for li in li_tags if li.find("a")]
                for a_tag in a_tags:
                    href = a_tag.get("href")
                    pages.append(href)
                if a_tags:
                    url = domain + href
                    response = requests.get(url)
                    soup = bs(response.content, "html.parser")
                else:
                    more_pages = False
            else:
                more_pages = False
    
    else:
        print(f"Error: could not reach {url} (status code {response.status_code})")
        exit()
    
    name_of_folder = "anaconda-notebook-downloads"
    print(f"\nCreating {name_of_folder} folder\n")
    directory_path = Path(f"./{name_of_folder}")
    try:
        directory_path.mkdir(parents=True, exist_ok=True)
    except Exception as e:
        print(f"An error has occurred: {e}")
    
    for notebook in notebooks:
        # create URL by cat'ing domain + notebook + /download
        domain = "https://notebooks.anaconda.org"
        url = domain + notebook + "/download?version="
        print(f"Downloading {url}")
        # request and download it
        response = requests.get(url)
    
        notebook_name = notebook.split("/")[2]
        # write to disk
        with open(f"./{name_of_folder}/{notebook_name}.ipynb", "wb") as f:
            f.write(response.content)
            print(f"Saved {Path.cwd()}/{name_of_folder}/{notebook_name}.ipynb")
    
    print(f"Notebooks saved to: {directory_path}")
    
  6. Run the script with the following command:

    # Replace <PATH-TO-FILE> with the file path to your Python file
    # Replace <USERNAME> with your Anaconda.org username
    python <PATH-TO-FILE>/export-notebooks.py --username <USERNAME>
    
  7. The script downloads all notebooks under the provided username into an anaconda-notebook-downloads folder in your current working directory.
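After the script finishes, it can be worth checking that each downloaded file is actually a notebook rather than a saved error page. The following is a minimal sketch, assuming the default anaconda-notebook-downloads folder and that a valid notebook is a JSON object with a top-level "cells" key (per the Jupyter nbformat specification):

```python
import json
from pathlib import Path


def check_notebooks(folder):
    """Map each .ipynb filename in `folder` to True if it parses as notebook JSON."""
    results = {}
    folder_path = Path(folder)
    if not folder_path.is_dir():
        return results
    for nb in sorted(folder_path.glob("*.ipynb")):
        try:
            data = json.loads(nb.read_text(encoding="utf-8"))
            # valid notebooks are JSON objects with a top-level "cells" key
            results[nb.name] = isinstance(data, dict) and "cells" in data
        except (json.JSONDecodeError, UnicodeDecodeError):
            results[nb.name] = False
    return results


if __name__ == "__main__":
    for name, ok in check_notebooks("anaconda-notebook-downloads").items():
        print(f"{'OK ' if ok else 'BAD'} {name}")
```

Any file flagged BAD can be re-downloaded manually from its notebook page on anaconda.org.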

Uploading your notebooks to Anaconda Notebooks#

Anaconda Notebooks provides secure file storage, cloud-based editing, and sharing capabilities for any notebooks you want to work with and store in the cloud.

Note

Uploading files to Anaconda Notebooks requires an Anaconda Cloud account. If you don’t have an Anaconda Cloud account, register a free account now.

  1. Log in to anaconda.cloud.

  2. Click Notebooks.

  3. In Anaconda Notebooks, click Upload Files and select the .ipynb files you want to upload.

You can also share your notebooks with other Anaconda Cloud users by clicking Share in an open notebook. For more information on sharing notebooks, see Sharing Anaconda Notebooks.

For more information about Anaconda Notebooks, see the Anaconda Notebooks FAQ.

Exporting Environments#

See Downloading your environment for information on manually downloading environments via the anaconda.org interface or the command line.

Caution

Use this script at your own risk. It has only been tested on macOS and may have unintended consequences on other operating systems.

If you have many environments to download, use the following script to download them all automatically. The script downloads every version of each environment.

  1. Open a terminal (Anaconda Prompt on Windows).

  2. Create an environment containing BeautifulSoup and requests using the following command:

    conda create -n export-environments beautifulsoup4 requests -y
    
  3. Then activate the environment using the following command:

    conda activate export-environments
    
  4. Create a Python file called export-environments.py.

  5. Paste the following script into the file:

    try:
        from bs4 import BeautifulSoup as bs
        import requests
    except ImportError:
        print("Did not find BeautifulSoup or requests.")
        print(
            "To install dependencies, please run:\n conda create -n export-environments beautifulsoup4 requests -y\n"
        )
        print(
            "Then run:\n  conda activate export-environments\nand:\n  python export-environments.py"
        )
        exit()
    from pathlib import Path
    import argparse
    
    parser = argparse.ArgumentParser()
    parser.add_argument("--username", help="Username for anaconda.org")
    args = parser.parse_args()
    
    username = args.username
    
    domain = "https://anaconda.org"
    if not username:
        username = input("Input anaconda.org username: ")
    page1 = f"/{username}/environments"
    url = f"{domain}{page1}"
    
    response = requests.get(url)
    
    
    def next_arrow_ignore_unavailable(tag):
        classes = []
        if tag.name == "li":
            classes = tag.get("class", [])
        if "Next" in tag.text:
            return "arrow" in classes and "unavailable" not in classes
        else:
            return False
    
    
    def process_page(soup, environments=None):
        # avoid a shared mutable default argument
        if environments is None:
            environments = []
        section_class = "small-block-grid-1 medium-block-grid-2"
        ul = soup.find("ul", class_=section_class)
        if ul:
            items = ul.find_all("li")
            for item in items:
                proj_href = item.find("a").get("href")
                environments.append(proj_href)
        return environments
    
    
    pages = []
    environments = []
    if response.status_code == 200:
        soup = bs(response.content, "html.parser")
        more_pages = True
        while more_pages:
            # process current page
            environments = process_page(soup, environments)
    
            # check for more pages
            li_tags = soup.find_all(next_arrow_ignore_unavailable)
            if li_tags:
                a_tags = [li.find("a") for li in li_tags if li.find("a")]
                for a_tag in a_tags:
                    href = a_tag.get("href")
                    pages.append(href)
                if a_tags:
                    url = domain + href
                    response = requests.get(url)
                    soup = bs(response.content, "html.parser")
                else:
                    more_pages = False
            else:
                more_pages = False
    else:
        print(f"Error: could not reach {url} (status code {response.status_code})")
        exit()
    
    name_of_folder = "anaconda-environment-downloads"
    print(f"\nCreating {name_of_folder} folder\n")
    directory_path = Path(f"./{name_of_folder}")
    try:
        directory_path.mkdir(parents=True, exist_ok=True)
    except Exception as e:
        print(f"An error has occurred: {e}")
    
    for env in environments:
        versions = []
        # build the environment's files page URL: domain + env + "/files"
        url = domain + env + "/files"
        print(f"Navigated to: {url}")
        response = requests.get(url)
        soup = bs(response.content, "html.parser")
        a_tags = soup.find_all("a")
        for a_tag in a_tags:
            href = a_tag.get("href")
            if href:
                if "download" in href and href.endswith(".yml"):
                    versions.append(href)
    
        env_name = env.split("/")[2]
        # write to disk
        for version in versions:
            version_file_name = f"{name_of_folder}{version}"
            version_path = Path(f"./{version_file_name}")
            try:
                version_path.parent.mkdir(parents=True, exist_ok=True)
            except Exception as e:
                print(e)
            download_url = domain + version
            download_response = requests.get(download_url)
            with open(f"./{version_file_name}", "wb") as f:
                f.write(download_response.content)
                print(f"Saved {Path.cwd()}/{version_file_name}")
    
    print(f"Environments saved to: {directory_path}")
    
  6. Run the script with the following command:

    # Replace <PATH-TO-FILE> with the file path to your Python file
    # Replace <USERNAME> with your Anaconda.org username
    python <PATH-TO-FILE>/export-environments.py --username <USERNAME>
    
  7. The script downloads all versions of all environments under the provided username into an anaconda-environment-downloads folder in your current working directory. Each environment’s download folder is named after the date and time (in UTC) at which it was uploaded to Anaconda.org.
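Each downloaded .yml file is a standard conda environment specification, so you can recreate an environment locally with the `conda env create` command:

```shell
# Replace <PATH-TO-YML-FILE> with the path to one of the downloaded .yml files
conda env create -f <PATH-TO-YML-FILE>
```

The new environment takes its name from the `name:` field inside the .yml file.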