{ "cells": [ { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "# Import to make brainrender embed scenes in jupyter notebooks\n", "from brainrender.scene import Scene\n", "from brainrender import __version__, settings\n", "from rich import print\n", "from vedo import embedWindow\n", "\n", "from myterial import blue_light, salmon\n", "\n", "settings.SHOW_AXES = False\n", "settings.WHOLE_SCREEN = False\n", "\n", "embedWindow(None)\n", "if not __version__=='2.0.2.9':\n", " raise ValueError(f'This executable is meant to work with brainrender 2.0.2.9, not: {__version__}')\n", "\n", "def make_scene(species='mouse'):\n", " if species == 'mouse':\n", " scene = Scene()\n", " elif species == 'zfish':\n", " scene = Scene(atlas_name=\"mpin_zfish_1um\",)\n", " else:\n", " raise ValueError(f'Species not supported: {species}')\n", " scene.root._needs_silhouette = True\n", " scene.root._silhouette_kwargs['lw'] = 1\n", " return scene\n", " \n", "def render_scene(scene, **kwargs):\n", " print(f'[{blue_light}]Rendering scene, press \"[{salmon} b]q[/{salmon} b]\" to close')\n", " scene.render(**kwargs)\n", "\n", " # close when done\n", " scene.plotter.close()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Introduction\n", "\n", "Understanding how nervous systems generate behavior benefits from gathering multidimensional data from different individual animals. These data range from neural activity recordings and anatomical connectivity, to cellular and subcellular information such as morphology and gene expression profiles. These different types of data should ideally all be in register so that, for example, neural activity in one brain region can be interpreted in light of the connectivity of that region or the cell types it contains. Such registration, however, is challenging. 
Often it is not technically feasible to obtain multidimensional data in a single experiment, and registration to a common reference frame must be performed post hoc. Even for the same experiment type, registration is necessary to allow comparisons across individual animals [@bib25].\n", "\n", "While different types of references can in principle be used, neuroanatomical location is a natural and the most commonly used reference frame [@bib6; @bib21; @bib2; @bib15]. In recent years, several high-resolution three-dimensional (3D) digital brain atlases have been generated for model species commonly used in neuroscience [@bib32; @bib21; @bib2; @bib15]. These atlases provide a framework for registering different types of data across macro- and microscopic scales. A key output of this process is the visualization of all datasets in register. Given the intrinsically 3D geometry of brain structures and individual neurons, 3D renderings are more readily understandable and can provide more information than two-dimensional images. Exploring interactive 3D visualizations of the brain gives an overview of the relationship between datasets and brain regions and helps generate intuitive insights about these relationships. This is particularly important for large-scale datasets such as those generated by open-science projects like MouseLight [@bib33] and the Allen Mouse Connectome [@bib21]. In addition, high-quality 3D visualizations facilitate the communication of experimental results registered to brain anatomy.\n", "\n", "Generating custom 3D visualizations of atlas data requires programmatic access to the atlas. While some of the recently developed atlases provide an API (Application Programming Interface) for accessing atlas data [@bib32; @bib15], rendering these data in 3D remains a demanding and time-consuming task that requires significant programming skills. 
Moreover, visualization of user-generated data registered onto the atlas requires an interface between the user data and the atlas data, which further requires advanced programming knowledge and extensive development. There is therefore a need for software that can simplify the process of visualizing 3D anatomical data from available atlases and from new experimental datasets.\n", "\n", "Currently, existing software packages such as cocoframer [@bib16], BrainMesh [@bib34], and SHARPTRACK [@bib24] provide some functionality for 3D rendering of anatomical data. These packages, however, are only compatible with a single atlas and cannot be used to render data from different atlases or different animal species. Achieving this requires adapting the existing software to the different atlas datasets or developing new dedicated software altogether, at the cost of significant, often duplicated, additional effort. An important limitation of the currently available software is that it frequently does not support rendering of non-atlas data, such as data from publicly available datasets (e.g. MouseLight) or produced by individual laboratories. This capability is essential for easily mapping newly generated data onto brain anatomy at high resolution and producing visualizations of multidimensional datasets. More advanced software such as natverse [@bib5] offers extensive data visualization and analysis functionality, but it is currently mostly restricted to data obtained from the _Drosophila_ brain. Simple Neurite Tracer [@bib3], an ImageJ-based tool, can render neuronal morphological data from public and user-generated datasets and is compatible with several reference atlases. However, this software does not support visualization of data other than neuronal morphological reconstructions, nor can it be easily adapted to work with different or new atlases beyond the ones already supported. 
Finally, software such as MagellanMapper [@bib35] can be used to visualize and analyze large 3D brain imaging datasets, but the visualization is restricted to one data item (i.e. images from one individual brain). It is therefore not possible to combine data from different sources into a single visualization. Ideally, rendering software should work with 3D mesh data instead of 3D voxel image data to allow the creation of high-quality renderings and facilitate the integration of data from different sources.\n", "\n", "An additional consideration is that existing software tools for programmatic neuroanatomical renderings have been developed in programming languages such as R and MATLAB, and there is currently no available alternative in Python. The popularity of Python within the neuroscientific community has grown tremendously in recent years [@bib19]. Building on Python’s simple syntax and free, high-quality data processing and analysis packages, several open-source tools directly aimed at neuroscientists have been written in Python and are increasingly used (e.g., [@bib18]; [@bib22]; [@bib31]). Developing Python-based software for the universal generation of 3D renderings of anatomically registered data can therefore take advantage of the increasing strength and depth of the Python neuroscience community for testing and further development.\n", "\n", "For these reasons, we have developed brainrender: an open-source Python package for creating high-resolution, interactive 3D renderings of anatomically registered data. Brainrender is written in Python and integrated with BrainGlobe’s AtlasAPI [@bib7] to interface natively with different atlases without the need for modification. Brainrender supports the visualization of data acquired with different techniques and at different scales. Data from multiple sources can be combined in a single rendering to produce rich and informative visualizations of multidimensional data. 
Brainrender can also be used to create high-resolution, publication-ready images and videos (see [@bib31]; [@bib1]), as well as interactive online visualizations, to facilitate the dissemination of anatomically registered data. Finally, using brainrender requires minimal programming skills, which should accelerate the adoption of this new software by the research community. All brainrender code is available in the GitHub repository together with extensive online documentation and examples.\n", "\n", "# Results\n", "\n", "## Design principles and implementation\n", "\n", "A core design goal for brainrender was to generate visualization software compatible with any reference atlas, thus providing a generic and flexible tool ([Figure 1A](#fig1)). To achieve this goal, brainrender has been developed as part of BrainGlobe’s computational neuroanatomy software suite. In particular, we integrated brainrender directly with BrainGlobe’s AtlasAPI [@bib7]. The AtlasAPI can download and access atlas data from several supported atlases in a unified format. Brainrender uses the AtlasAPI to access 3D mesh data from individual brain regions as well as metadata about the hierarchical organization of the brain’s structures ([Figure 1B](#fig1)). Thus, the same programming interface can be used to access data from any atlas (see code examples in [Figure 2](#fig2)), including recently developed ones (e.g. the enhanced and unified mouse brain atlas, [@bib6]).\n", "\n", "figure: Figure 1.\n", ":::\n", "![](article.ipynb.media/fig1.jpg)\n", "\n", "### Design principles.\n", "\n", "(**A**) Schematic illustration of how different types of data can be loaded into brainrender using either brainrender’s own functions, software packages from the BrainGlobe suite, or custom Python scripts. All data loaded into brainrender is converted to a unified format, which simplifies the process of visualizing data from different sources. (**B**) Using brainrender with different atlases. 
Visualization of brain atlas data from three different atlases using brainrender. Left, Allen atlas of the mouse brain showing the superficial (SCs) and motor (SCm) subdivisions of the superior colliculus and the Zona Incerta (data from [@bib32]). Middle, visualization of the cerebellum and tectum in the larval zebrafish brain (data from [@bib15]). Right, visualization of the precentral gyrus, postcentral gyrus, and temporal lobe of the human brain (data from [@bib8]). (**C**) The brainrender GUI. Mouse, human, and zebrafish larvae drawings from [scidraw.io](https://scidraw.io/) ([doi.org/10.5281/zenodo.3925991](http://doi.org/10.5281/zenodo.3925991), [doi.org/10.5281/zenodo.3926189](http://doi.org/10.5281/zenodo.3926189), [doi.org/10.5281/zenodo.3926123](http://doi.org/10.5281/zenodo.3926123)).\n", ":::\n", "{#fig1}\n", "\n", "\n", "figure: Figure 2.\n", ":::\n", "![](article.ipynb.media/fig2.jpg)\n", "\n", "### Code examples.\n", "\n", "Example python code for visualizing brain regions in the mouse and larval zebrafish brains. The same commands can be used for both atlases and switching between atlases can be done by simply specifying which atlas to use when creating the visualization. Further examples can be found in brainrender’s GitHub repository.\n", ":::\n", "{#fig2}\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "The second major design principle was to enable rendering of any data type that can be registered to a reference atlas, either from publicly available datasets or from individual laboratories. Brainrender can directly visualize data produced with any analysis software from the BrainGlobe suite, including cellfinder [@bib30] and brainreg [@bib31]. In addition, brainrender provides functionality for easily loading and visualizing commonly used data types such as .npy files with cell coordinates or image data, .obj, and .stl files with 3D mesh data and .json files with streamlines data for mesoscale connectomics. 
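A minimal, hedged sketch of this workflow (the file name `cells.npy` is hypothetical; the (N, 3) one-row-per-cell layout is the convention expected by brainrender's `Points` actor):

```python
import numpy as np

# Hypothetical atlas-registered cell coordinates (in microns), one row per cell
rng = np.random.default_rng(0)
cells = rng.uniform(low=(4000, 1000, 4000), high=(9000, 6000, 8000), size=(100, 3))
np.save("cells.npy", cells)

# A file like this can then be added to a brainrender Scene as a Points actor
print(np.load("cells.npy").shape)  # (100, 3)
```
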
Additional information about the file formats accepted by brainrender can be found in the online documentation. BrainGlobe’s software suite also includes imio, which can load data from several file types (e.g. .tiff and .nii), and additional file formats can be loaded through the numerous packages provided by the Python ecosystem. Finally, the existing loading functionality can be easily expanded to support user-specific needs by plugging custom user code directly into the brainrender interface ([Figure 1A](#fig1)).\n", "\n", "One of the goals of brainrender is to facilitate the creation of high-resolution images, animated videos, and interactive online visualizations from any anatomically registered data. Brainrender uses vedo as the rendering engine [@bib20], a state-of-the-art tool that enables fast, high-quality rendering with minimal hardware requirements.\n", "\n", "High-resolution renderings of rich 3D scenes can be produced rapidly (e.g. 10,000 cells in less than 2 s) on standard laptop or desktop configurations. Benchmarking tests across different operating systems and machine configurations show that using a GPU can increase the framerate of interactive renderings by a factor of 3.5 (see [Tables 1](#table1) and [2](#table2) in Materials and methods). This performance increase, however, depends on the complexity of the pre-processing steps, such as data loading and mesh generation, which run on the CPU. As one of the main goals of brainrender is to produce high-resolution visualizations, we have made the rendering quality independent of the hardware configuration, which affects only the rendering time. Animated videos and online visualizations can be produced with a few lines of code in brainrender. 
Several options are provided for easily customizing the appearance of rendered objects, thus enabling high-quality, rich data visualizations that combine multiple data sources.\n", "\n", "table: Table 1.\n", ":::\n", "### Machine configurations used for benchmark tests.\n", "\n", "| N | OS | CPU | GPU |\n", "| - | ------------------------- | ----------------------------------------- | -------------------------- |\n", "| 1 | macOS Mojave 10.14.6 | 2.3 GHz Intel Core i9 | Radeon Pro 560 × 4 GB GPU |\n", "| 2 | Ubuntu 18.04.2 LTS x86 64 | Intel i7-8565U (x) @ 4.5 GHz | No GPU |\n", "| 3 | Windows 10 | Intel(R) Core i7-7700HQ @ 2.8 GHz | No GPU |\n", "| 4 | Windows 10 | Intel(R) Xeon(R) CPU E5-2643 v3 @ 3.4 GHz | NVIDIA GeForce GTX 1080 Ti |\n", ":::\n", "{#table1}\n", "\n", "table: Table 2.\n", ":::\n", "### Benchmark test results.\n", "\n", "The number of actors refers to the total number of elements rendered, and the number of vertices refers to the total number of mesh vertices in the rendering. Run durations are given in seconds.\n", "\n", "| Test | Machine | GPU | # actors | # vertices | FPS | Run duration (s) |\n", "| ------------------- | ------- | --- | -------- | ---------- | ------ | ---------------- |\n", "| 10 k cells | 1 | Yes | 3 | 1,029,324 | 24.76 | 0.81 |\n", "| | 2 | No | 3 | 1,029,324 | 22.46 | 1.16 |\n", "| | 3 | No | 3 | 1,029,324 | 20.00 | 1.41 |\n", "| | 4 | Yes | 3 | 1,029,324 | 100.00 | 1.34 |\n", "| 100 k cells | 1 | Yes | 3 | 9,849,324 | 18.87 | 3.23 |\n", "| | 2 | No | 3 | 9,849,324 | 14.91 | 4.34 |\n", "| | 3 | No | 3 | 9,849,324 | 0.43 | 7.94 |\n", "| | 4 | Yes | 3 | 9,849,324 | 1.20 | 1.13 |\n", "| 1 M cells | 1 | Yes | 3 | 98,049,324 | 2.65 | 31.01 |\n", "| | 2 | No | 3 | 98,049,324 | 2.55 | 96.49 |\n", "| | 3 | No | 3 | 98,049,324 | 0.03 | 86.75 |\n", "| | 4 | Yes | 3 | 98,049,324 | 0.13 | 36.57 |\n", "| Slicing 10 k cells | 1 | Yes | 3 | 237,751 | 37.64 | 0.96 |\n", "| | 2 | No | 3 | 237,751 | 39.10 | 1.25 |\n", "| | 3 | No | 3 | 237,751 | 26.32 | 1.88 |\n", "| | 4 | Yes | 3 | 
237,751 | 200.00 | 1.34 |\n", "| Slicing 100 k cells | 1 | Yes | 3 | 276,092 | 31.79 | 7.77 |\n", "| | 2 | No | 3 | 276,092 | 25.98 | 9.09 |\n", "| | 3 | No | 3 | 276,092 | 21.28 | 16.88 |\n", "| | 4 | Yes | 3 | 276,092 | 111.11 | 9.65 |\n", "| Slicing 1 M cells | 1 | Yes | 3 | 275,069 | 11.23 | 91.31 |\n", "| | 2 | No | 3 | 275,069 | 5.39 | 104.79 |\n", "| | 3 | No | 3 | 275,069 | 5.03 | 158.99 |\n", "| | 4 | Yes | 3 | 275,069 | 37.04 | 97.43 |\n", "| Brain regions | 1 | Yes | 1678 | 1,864,388 | 9.38 | 11.78 |\n", "| | 2 | No | 1678 | 1,864,388 | 7.61 | 27.40 |\n", "| | 3 | No | 1678 | 1,864,388 | 6.49 | 46.79 |\n", "| | 4 | Yes | 1678 | 1,864,388 | 11.90 | 35.83 |\n", "| Animation | 1 | Yes | 8 | 96,615 | 9.91 | 18.98 |\n", "| | 2 | No | 8 | 96,615 | 22.12 | 12.63 |\n", "| | 3 | No | 8 | 96,615 | 15.15 | 11.92 |\n", "| | 4 | Yes | 8 | 96,615 | 47.62 | 12.29 |\n", "| Volume | 1 | Yes | 12 | 49,324 | 1.79 | 2.31 |\n", "| | 2 | No | 12 | 49,324 | 1.66 | 1.95 |\n", "| | 3 | No | 12 | 49,324 | 3.55 | 2.15 |\n", "| | 4 | Yes | 12 | 49,324 | 23.26 | 1.21 |\n", ":::\n", "{#table2}\n", "\n", "Finally, we aimed for brainrender to empower scientists with little or no programming experience to generate advanced visualizations of their anatomically registered data. To make brainrender as user-friendly as possible, we have produced extensive documentation, tutorials, and examples for installing and using the software. We have also developed a graphical user interface (GUI) to access most of brainrender’s core functionality. This GUI can be used to perform actions such as rendering of brain regions and labeled cells (e.g. 
from cellfinder) and creating images of the rendered data, without writing custom Python code ([Figure 1C](#fig1); [Video 1](#video1)).\n", "\n", "\n", "\n", "## Visualizing brain regions and other structures\n", "\n", "A key element of any neuroanatomical visualization is the rendering of the entire outline of the brain as well as the borders of brain regions of interest. In brainrender, this can easily be achieved by specifying which brain regions to include in the rendering. The software will then use BrainGlobe’s AtlasAPI to load the 3D data and subsequently render them ([Figure 1B](#fig1)).\n", "\n", "Brainrender can also render brain areas defined by factors other than anatomical location, such as gene expression levels or functional properties. These can be loaded either directly as 3D mesh data after processing with dedicated software (e.g., [@bib30]; [@bib27]; [@bib13]; [Figure 3A](#fig3)) or as 3D volumetric data ([Figure 3E](#fig3)). For the latter, brainrender takes care of converting the voxels into a 3D mesh for rendering. Furthermore, custom 3D meshes can be created to visualize different types of data. For example, brainrender can import JSON files with tractography connectivity data and create ‘streamlines’ to visualize efferent projections from a brain region of interest ([Figure 3B](#fig3)).\n" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "text/html": "
Rendering scene, press \"q\" to close\n\n" }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Figure 3\n", "'''\n", "\n", "figure: Figure 3.\n", ":::\n", "![](article.ipynb.media/fig3.jpg)\n", "\n", "### Visualizing different types of data in brainrender.\n", "\n", "(**A**) Spread of fluorescence labeling following viral injection of AAV2-CRE-eGFP in the superior colliculus of two FLEX-TdTomato mice. 3D objects showing the injection sites were created using custom Python scripts following acquisition of a 3D image of the entire brain with serial two-photon tomography and registration of the image data to the atlas’ template (with brainreg, [@bib30]). (**B**) Streamlines visualization of efferent projections from the mouse primary motor cortex following injection of an anterogradely transported virus expressing fluorescent proteins (original data from [@bib21], downloaded from Neuroinformatics NL with brainrender). (**C**) Visualization of the location of several implanted Neuropixels probes from multiple mice (data from [@bib28]). Dark salmon-colored tracks show probes going through both primary/anterior visual cortex (VISp/VISa) and the dorsal lateral geniculate nucleus of the thalamus. (**D**) Single periaqueductal gray (PAG) neuron. The PAG and superior colliculus are also shown. The neuron’s morphology was reconstructed by targeting the expression of fluorescent proteins in excitatory neurons in the PAG via an intersectional viral strategy, followed by imaging of cleared tissue and manual reconstruction of the neuron’s morphology with Vaa3D software. Data were registered to the Allen atlas with SHARPTRACK [@bib24]. The 3D data were saved as a .stl file and loaded directly into brainrender. (**E**) Gene expression data. Left, expression of genes ‘brn3c’ and ‘nk1688CGt’ in the tectum of the larval zebrafish brain (gene expression data from fishatlas.neuro.mpg.de, 3D objects created with custom Python scripts). 
Right, expression of gene ‘Gpr161’ in the mouse hippocampus (gene expression data from [@bib32], downloaded with brainrender; 3D objects created with brainrender). Colored voxels indicate high gene expression. The CA1 field of the hippocampus is also shown.\n", ":::\n", "{#fig3}\n", "'''\n", "from myterial import indigo as scmcol\n", "from myterial import indigo_dark as scscol\n", "from myterial import blue_darker as zicol\n", "\n", "\n", "\n", "# Panel (A)\n", "scene = make_scene()\n", "for reg, col in zip((\"SCm\", \"SCs\", \"ZI\"), (scmcol, scscol, zicol)):\n", " scene.add_brain_region(reg, color=col, silhouette=True)\n", "render_scene(scene)\n" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/html": "
downloading ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00\n\n" }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": "
Rendering scene, press \"q\" to close\n\n" }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Panel (B)\n", "from myterial import blue_grey\n", "from myterial import salmon_dark as streamlinescol\n", "\n", "from brainrender.atlas_specific import get_streamlines_for_region\n", "from brainrender.actors.streamlines import Streamlines\n", "\n", "scene = make_scene()\n", "\n", "# get streamlines data\n", "streams = get_streamlines_for_region(\"MOp\")\n", "\n", "# add Streamlines actors\n", "s = scene.add(Streamlines(streams[0], color=streamlinescol, alpha=1))\n", "\n", "# add brain regions\n", "th = scene.add_brain_region(\n", " \"TH\", alpha=0.45, silhouette=False, color=blue_grey\n", ")\n", "\n", "\n", "render_scene(scene)" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/html": "
Rendering scene, press \"q\" to close\n\n" }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Panel (C)\n", "from brainrender.actors import Points\n", "\n", "try:\n", " from oneibl.onelight import ONE\n", "except ImportError:\n", " %pip install ibllib\n", " from oneibl.onelight import ONE\n", "\n", "from myterial import blue_grey, blue_grey_dark, salmon_light, salmon_darker\n", "\n", "scene = make_scene()\n", "\n", "\n", "# download probe data from ONE\n", "one = ONE()\n", "one.set_figshare_url(\"https://figshare.com/articles/steinmetz/9974357\")\n", "\n", "# select sessions with trials\n", "sessions = one.search([\"trials\"])\n", "\n", "# get probe locations\n", "probes_locs = []\n", "for sess in sessions:\n", " probes_locs.append(one.load_dataset(sess, \"channels.brainLocation\"))\n", "\n", "# get single probe tracks\n", "for locs in probes_locs:\n", " k = int(len(locs) / 374.0)\n", "\n", " for i in range(k):\n", " points = locs[i * 374 : (i + 1) * 374]\n", " regs = points.allen_ontology.values\n", "\n", " # color based on if probes go through selected regions\n", " if \"LGd\" in regs and (\"VISa\" in regs or \"VISp\" in regs):\n", " color = salmon_darker\n", " alpha = 1\n", " sil = 1\n", " elif \"VISa\" in regs:\n", " color = salmon_light\n", " alpha = 1\n", " sil = 0.5\n", " else:\n", " continue\n", "\n", " # render channels as points\n", " spheres = Points(\n", " points[[\"ccf_ap\", \"ccf_dv\", \"ccf_lr\"]].values,\n", " colors=color,\n", " alpha=alpha,\n", " radius=30,\n", " )\n", " spheres = scene.add(spheres)\n", " scene.add_silhouette(spheres, lw=sil)\n", "\n", "\n", "# Add brain regions\n", "visp, lgd = scene.add_brain_region(\n", " \"VISp\",\n", " \"LGd\",\n", " hemisphere=\"right\",\n", " alpha=0.3,\n", " silhouette=False,\n", " color=blue_grey_dark,\n", ")\n", "visa = scene.add_brain_region(\n", " \"VISa\",\n", " hemisphere=\"right\",\n", " alpha=0.2,\n", " silhouette=False,\n", " color=blue_grey,\n", ")\n", "th = 
scene.add_brain_region(\n", " \"TH\", alpha=0.3, silhouette=False, color=blue_grey_dark\n", ")\n", "th.wireframe()\n", "scene.add_silhouette(lgd, visp, lw=2)\n", "\n", "render_scene(scene)" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "data": { "text/html": "
Rendering scene, press \"q\" to close\n\n" }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Panel (D)\n", "from brainrender.actors import Point\n", "\n", "from myterial import blue_grey_light as scmcol\n", "from myterial import blue_grey as pagcol\n", "from myterial import salmon_dark as neuroncol\n", "\n", "cam = {\n", " \"pos\": (-16954, 2456, -3961),\n", " \"viewup\": (0, -1, 0),\n", " \"clippingRange\": (22401, 34813),\n", " \"focalPoint\": (7265, 2199, -5258),\n", " \"distance\": 24256,\n", "}\n", "\n", "scene = make_scene()\n", "\n", "# add brain regions\n", "pag = scene.add_brain_region(\"PAG\", alpha=0.4, silhouette=False, color=pagcol)\n", "scm = scene.add_brain_region(\"SCm\", alpha=0.3, silhouette=False, color=scmcol)\n", "\n", "# add neuron mesh\n", "neuron = scene.add(r\"C:\\Users\\Federico\\Documents\\GitHub\\BrainRender\\paper\\data\\yulins_neuron.stl\")\n", "neuron.c(neuroncol)\n", "\n", "# add sphere at soma location\n", "soma_pos = [9350.51912036, 2344.33986638, 5311.18297796]\n", "point = scene.add(Point(soma_pos, color=neuroncol, radius=25))\n", "scene.add_silhouette(point, lw=1, color=\"k\")\n", "scene.add_silhouette(neuron, lw=1, color=\"k\")\n", "\n", "# slice scene repeatedly to cut out region of interest\n", "p = [9700, 1, 800]\n", "plane = scene.atlas.get_plane(pos=p, plane=\"frontal\")\n", "scene.slice(plane, actors=[scm, pag, scene.root])\n", "\n", "p = [11010, 5000, 5705]\n", "plane = scene.atlas.get_plane(pos=p, norm=[0, -1, 0])\n", "scene.slice(plane, actors=[scene.root])\n", "\n", "# render\n", "render_scene(scene, camera=cam, zoom=9)\n" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "ename": "FileNotFoundError", "evalue": "File C:\\Users\\Federico\\Documents\\GitHub\\BrainRender\\paper\\data\\T_AVG_brn3c_GFP.obj not found", "output_type": "error", "traceback": [ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", 
"\u001b[0;31mFileNotFoundError\u001b[0m Traceback (most recent call last)", "\u001b[0;32m