Can I use grayscale images when working with ImageJ?

I am using ImageJ to analyze Western Blots. I have scanned films in as grayscale images because this is how we did it in my old lab. People in my current lab are not satisfied with that explanation and think I should consider using color images. I've been searching for protocols but they all seem to address how to use the program once you have some films scanned.

Can anyone help me figure out what considerations go into choosing how I scan my films?

There is no evidence that one is better than the other, most likely because it differs from case to case. Neither you nor your critics are right. There is a tiny bit of science in a paper on digitizing blots, generalizing from blots of a specific protein (PMID: 19517440), and they use grayscale for no stated reason. Come to think of it, that is the best paper in the field of immunoblot quantification, and it still lacks evidence.

Immunoblotting is semiquantitative: it may sort and order bands (and protein amounts), but it often fails at estimating differences and ratios, and may even fail to detect differences. Just try your best to find a difference, in the knowledge that if there is no difference in protein amounts, no fiddling with contrast will create a difference in band quantitation.

There is almost no difference between color and black-and-white digitized images of an immunoblot. The parts of the blot that are black will still appear in the computer with a gray value of 0, even though a tri-color image file will detail that as Red = 0, Green = 0, Blue = 0. The parts that are nearly black will still be very close to zero.

The only thing that changes much between grayscale and RGB is the background. The background may change, for example, from Gray = 200 to Red = 190, Green = 190, Blue = 220. There is one consequence of such a shift in background values. (These changes may be induced by switching between grayscale and RGB, but also by altering exposure time, etc.) When you use a method where the background gets too close to the bands, the "amplitude" of the bands, i.e., the height of the peaks in ImageJ, will be reduced. This loss of contrast can wash out some of the differences between blot bands and weaken support for differences between bands (i.e., for the alternative hypothesis), making you miss a genuine protein difference.
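To see why a background that creeps toward the band intensity matters, here is a minimal sketch of the quantity a densitometry tool such as ImageJ's gel analyzer effectively reports — the height of the band's peak above the local background. `band_amplitude` is a hypothetical helper written for illustration, not an ImageJ function:

```python
import numpy as np

def band_amplitude(lane_profile, background):
    # Height of the band's peak above the local background.
    # Bands are dark, so the darkest pixel (the minimum) marks the peak.
    return background - np.asarray(lane_profile).min()

# The same band (darkest pixel = 20) yields a smaller measured amplitude
# when the background level sits closer to the band intensity:
profile = [200, 180, 90, 20, 95, 200]
print(band_amplitude(profile, background=200))  # 180
print(band_amplitude(profile, background=150))  # 130
```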

Again, if your blot bands are different enough that your percent increase or decrease is huge and your p value is small, you are all set, regardless of the method you chose. Ask your critics to explain your observed differences other than by differences in protein. But if your quantification fails the significance test, I think it would be an honest effort to switch from grayscale to RGB, or the other way around, just in case your first choice had been the one that dampens contrast.

Analyzing fluorescence microscopy images with ImageJ

This work is made available in the hope it will be useful to researchers in biology who need to quickly get to grips with the main principles of image analysis.

Much of the initial text was written during a time when I lived and worked in Heidelberg, which is reflected in many of the illustrations.

The original handbook was a PDF created with LaTeX. This PDF version is still probably the best for printing. You can find it at ResearchGate.

However, since then the content has been revised and translated into AsciiDoc for three reasons:

To make it available as a website, through GitBook

To make it easier and faster to update the contents

To make possible more community involvement, both through GitBook’s own comments and discussions, and using the source code hosted on GitHub

This book is based primarily on Wayne Rasband’s fantastic ImageJ. Nevertheless, the range of flexible and powerful open source software and resources for bioimage analysis continues to grow. With that in mind, you might also consider becoming familiar with some alternatives, such as:

ImageJ2 and Icy, which are designed to handle a very wide range of applications

CellProfiler and KNIME, especially for high-throughput analysis and data mining

ilastik, especially when its powerful machine learning features are needed to identify or classify challenging structures

QuPath, especially for digital pathology or whole slide image analysis [1]

Finally, the goal of this handbook is to give enough background to make it possible to progress quickly in bioimage analysis. To go deeper, as a complement to this book I highly recommend the excellent (and free) Bioimage Data Analysis, edited by Kota Miura.

All in all, I hope that someone might find this a useful introduction, and it may play a small part in helping to support the use and development of open source software and teaching materials for research.

Creating Pseudocolor Images using ImageJ

It is common for microscopists to examine fluorescent samples that have been exposed to a mixed set of fluorophores with specific staining properties. Images are often captured from these samples as a set, using different filter sets to isolate specific elements of the sample. Sometimes the images are captured in gray scale, and in other cases, they are captured with color cameras. In either case, one would often like to combine the images to create a single image that presents the information from all fluorophores at once. This kind of display is particularly useful for determining the relative locations of different components.

The public domain program ImageJ, written by Wayne Rasband at the National Institutes of Health, Bethesda, MD, and extensively supplemented by many contributors, contains a simple routine for taking three gray scale images, assigning a separate color to each, and merging them into an RGB pseudocolor image.

The process starts with the set of three images such as the following, which are derived from buccal cells stained with Wheat Germ Agglutinin-Rhodamine, MitoTracker-Green, and DAPI, respectively.

The three pictures are opened individually in ImageJ.

Then, the menu item “Image>Color>RGB Merge” is selected.

This brings up a dialog box that allows one to enter the specific images that are to form the final product.

Note that one has the option to enter any of the pictures in any of the channels. The most direct approach is to put each image into the corresponding channel: red to red, green to green, etc. It is generally useful to check the “Keep source images” box for further experimentation. After clicking on “OK”, the RGB image is generated (Figure 4).

It is also possible to manipulate the channels for special purposes. As an example, the image from the red source was used for both the red and the blue channels (omitting the blue data) to create an image of magenta and green (Figure 5). This display is particularly useful for those viewers who are deuteranopes (red/green color blind).
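The magenta/green trick can be sketched outside ImageJ as well. Assuming two single-channel arrays, stacking the red-source image into both the red and blue planes reproduces what the RGB merge dialog does; `merge_magenta_green` is a hypothetical NumPy helper, not part of ImageJ:

```python
import numpy as np

def merge_magenta_green(red_source, green_source):
    # Feed the "red" image into both the red and blue channels (magenta),
    # and the second image into green; magenta/green remains
    # distinguishable for red/green colour-blind viewers.
    r = np.asarray(red_source, dtype=np.uint8)
    g = np.asarray(green_source, dtype=np.uint8)
    return np.dstack([r, g, r])

wga = np.full((4, 4), 200, dtype=np.uint8)   # stand-in for the WGA-Rhodamine image
mito = np.full((4, 4), 120, dtype=np.uint8)  # stand-in for the MitoTracker image
rgb = merge_magenta_green(wga, mito)
print(rgb.shape)  # (4, 4, 3)
```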

Once the image is formed, it is possible to adjust each of the color channels independently using the menu item “Image>Adjust>Color Balance”.

There are several constraints that must be considered when preparing such images.

First, the separate images need to be aligned. Unfortunately, ImageJ does not yet have the ability to superimpose layers that could be moved independently, as in Photoshop. On the other hand, there is a registration plugin available that allows either manual or automatic alignment prior to image assembly.

Second, it is important that the intensity distribution in each of the images be within the 8-bit dynamic range. This avoids oversaturation in the final image.

Third, it should be realized that images such as these are transforms from the original data. Each color channel activates a single type of pixel in the display. As a result, some of the subtlety of imaging is lost. DAPI, for instance has a significant fluorescence in the green wavelengths, which is lost with this kind of image, but might be retained in a color original.

Indeed, merging more than three channels results in visually ambiguous images when statically rendered (due both to the rendering technology and to the viewer’s ability to resolve colours). However, we routinely use composite images of more than three colours; with an interactive ability to conveniently adjust the channels (enable/disable/recolour, etc.), you can still usefully explore such images.

The native tools in ImageJ allow you to do this (though the UI is a little cumbersome), hence we wrote CompositeAdjuster just for this sort of situation.

Kevin Eliceiri

Dr. Kevin Eliceiri is an Associate Professor of Medical Physics and Biomedical Engineering and director of the Laboratory for Optical and Computational Instrumentation (LOCI) at the University of Wisconsin-Madison. Eliceiri completed his bachelor’s and doctoral degrees in Biotechnology and Biomedical Engineering at the University of Wisconsin in Madison. In 2008, he founded LOCI and started…


ThermImageJ - Thermal Image Functions and Macros for ImageJ

ThermImageJ is a collection of ImageJ functions and macros to allow for import and conversion of thermal image files and to assist in extracting raw data from infrared thermal images and converting these to temperatures using standard equations in thermography.

These macros will allow you to import most FLIR jpgs and videos and process the images in ImageJ.

ThermImageJ emerged from Thermimage, an R package with similar tools but more emphasis on biological heat transfer analysis.

  • ThermImageJ was developed on OSX, and tested using ImageJ v1.52o. Many features require installation of command line tools that may present future challenges on different operating systems. Testing and troubleshooting is ongoing, especially in Windows. Please report issues here, or consider converting to Mac or Linux.
  • Glenn J. Tattersall. (2019). ThermImageJ: Thermal Image Functions and Macros for ImageJ. doi:10.5281/zenodo.2652896.

External Software Downloads

  • FIJI is Just ImageJ. Download instructions:
  • Exiftool. (The standalone executable file is suggested).
    • Installation instructions:
    • Download instructions:
  • FFmpeg.
    • Choose the “get packages and executable files” option to facilitate an easy installation, unless you prefer to work with the source code.
    • Choose the static build.
    • Installation instructions:
    • Download instructions:

    ThermImageJ Downloads from this Github site

    All ThermImageJ files can be easily downloaded as a ZIP file by clicking on the green Clone or Download button and then selecting Download ZIP, or by going to the Recent Releases page and selecting the Source Code link for the most recent release. Unzip this folder on your computer for access to the toolset, luts, and the files located in their appropriate subfolders.

    The primary files you need to extract from this site are:

    A custom perl script, provided on this github repository, which can be downloaded and placed in a scripts folder with ImageJ:

    ThermImageJ macro toolset. A text file (.ijm) containing all the macros and functions:

    Additional Look Up Tables (LUTs), popularly used in thermal imaging, available on this github repository:

    • Install FIJI, exiftool, perl, and ffmpeg according to the website instructions above.
    • Troubleshoot or perform installation checks (see next section).
    • Launch FIJI and follow any update instructions.
    • Launch FIJI–>Help–>Update, allow it to update any plug-ins, then while the update window is open, select Manage update websites, and ensure that the FFMPEG box is ticked. Select ok, then click the Apply option, and restart FIJI. This FFMPEG plugin is required for importing avi files created during the conversion process, although it might require that you have FFMPEG installed at the command line.
    • Navigate to where FIJI is installed to find all the subfolders.
    • Download the ThermImageJ.ijm file from this site and copy into the FIJI/macros/toolsets folder.
    • Open the ThermImageJ.ijm file in any text editor, and verify the paths are properly set for your respective operating system. See the comments within the text file for guidance. Most of the default locations are likely fine, although FFMPEG is sometimes installed in different folders depending on what the user might have selected.
    • Download the additional luts files from this site and copy into your FIJI/luts folder. These are palettes that are commonly used in thermal imaging.
    • Download the perl script, from this site and copy into a FIJI/scripts folder.
    • Download Byte_Swapper.class to the plugins folder.
    • Restart ImageJ.
    • If everything succeeded (see checks below), the toolset should be installed and visible from your plugins menu.

    Verify exiftool is installed by launching a terminal (or cmd prompt) window and typing the following bash commands:

    If you see a version number (probably > 10) and no error, then exiftool is installed properly. The second line will tell you the path to where it is installed.

    Verify no errors on your system to ensure perl is installed correctly.

    Check that the perl script is accessible by perl (be sure to provide the proper path to the file on your system):

    You should see the following warning message:

    “Error: Please specify input file, output folder, the output filename base, pattern to split, and output file extension.”

    This is a good error, and verifies that the perl script is installed where your machine can access it!

    If, instead, you see an error like:

    “Can’t open perl script "/Applications/": No such file or directory”

    you will need to re-check the location of the script or the path information provided at the top of the ThermImageJ.ijm file.

    Now, do the same for ffmpeg:
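The three checks above (presumably `exiftool -ver`, `which exiftool`, and the perl/ffmpeg equivalents, since the commands themselves are not shown here) can also be done programmatically. A small Python sketch that simply asks the PATH for each required tool:

```python
import shutil

def check_tools(tools=("exiftool", "perl", "ffmpeg")):
    # Map each required command-line tool to its install path
    # (None means the tool is missing from the PATH).
    return {tool: shutil.which(tool) for tool in tools}

for tool, path in check_tools().items():
    print(f"{tool}: {path if path else 'NOT found on PATH'}")
```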

    Setting and verifying paths to command line tools

    Once you have installed everything above, and verified no errors, you can check or change the directory paths in FIJI/ImageJ.

    The ThermImageJ.ijm toolset file will detect whether you are using Mac OSX, Linux or Windows and attempt to define the appropriate file paths automatically. Thus, you should not need to change parameters, but it is useful to check and become familiar with the process, or to do any customisation necessary for your FIJI installation.

    Navigate to the ThermImageJ.ijm toolset file and open it using a text editor or the built-in ImageJ macro editor:

    Depending on your operating system or how system files are installed you may need to edit the specific path locations for your respective system:

    This also applies to the location of the file that should be placed in the scripts folder inside the Fiji folder.

    ThermImageJ assumes you have placed the file into a scripts subfolder where Fiji is installed, so hopefully you will not need to change this:

    Setting ThermImageJ Macros Up in FIJI/ImageJ

    Once you have installed everything above, and verified no errors, you can set the macros up in FIJI/ImageJ.

    Launch FIJI, left click the more-tools menu, which is the >> on the far right side of the menu bar:

    This will reveal any of the toolsets in the folder. Click on ThermImageJ to replace the present icons with ThermImageJ specific icons / macros:

    Once installed, the toolbar menu populates with new icons corresponding to the primary ThermImageJ functions:

    Once installed, the toolset should also populate the Plugin Dropdown Menu with the same, and some additional macros used less often:

    Feel free to edit your version of ThermImageJ.ijm and if you break it, you can always download a new one.

    You can edit it with any text editor or with the built-in ImageJ text editor by selecting Plugins–>Macros–>Edit and navigating to the Fiji/macros/toolset folder and selecting the ThermImageJ file. Or, from within ImageJ/Fiji, hold the shift key down, select the >> “More Tools” link, and, still holding the shift key, click on ThermImageJ to open the file in the built-in text editor.

    If you do make changes and save them, you will either need to restart Fiji, or restore the toolset bar by clicking on the >> “More Tools” link, selecting Restore Start-Up Tools then clicking on the >> “More Tools” link and selecting ThermImageJ again.

    Main Functions and Features

    Direct Import of Raw Data

    • Raw Import Mikron RTV
      • custom macro to import an old Mikron Mikrospec R/T video format
      • these files had simple encoding and are not likely in use any longer, except by the author
      • see for sample data
    • Raw Import FLIR SEQ
      • custom macro to import FLIR SEQ using the Import-Raw command
      • use only if you know the precise offset byte start and the number of bytes between frames (see Frame Start Byte Macro below).
      • this only works for certain SEQ files (usually those captured to computer), and only formats where tiff format underlies the video.
      • see for sample data
    • Frame Start Byte
      • This macro will scan a FLIR video file (SEQ) for the offset byte position ‘0200wwwwhhhh’, where wwww and hhhh are the image width and height in 16-bit little endian hexadecimal.
      • For example, the magicbyte for a 640x480 camera is “02008002e001”, where “8002” corresponds to 640 and “e001” corresponds to 480.
      • The user can provide a custom magicbyte, but should leave this blank otherwise.
      • The function is only used in conjunction with the Raw Import FLIR SEQ macro.
      • The function returns best estimates for the offset and gap bytes necessary for use with the Raw Import FLIR SEQ macro, although it is not guaranteed to be correct due to variances in SEQ file saving convention.
      • Note: on unix based OS, this macro calls the xxd executable and runs quickly. For Windows OS, Powershell Core 6 with the updated Format-Hex function needs to be installed, and the scan runs slowly.
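The magicbyte search described above amounts to scanning the file for a fixed prefix plus the width and height packed as little-endian 16-bit values. A rough Python sketch (hypothetical `find_magicbyte` helper, not the macro's actual code):

```python
import struct

def find_magicbyte(data, width, height):
    # Build the '0200' + width + height pattern (16-bit little endian)
    # and return every offset at which it occurs in the byte string.
    magic = b"\x02\x00" + struct.pack("<HH", width, height)
    offsets, i = [], data.find(magic)
    while i != -1:
        offsets.append(i)
        i = data.find(magic, i + 1)
    return offsets

# For a 640x480 camera the pattern is 02 00 80 02 e0 01 ("02008002e001"):
blob = b"\x00" * 10 + b"\x02\x00\x80\x02\xe0\x01" + b"\xff" * 10
print(find_magicbyte(blob, 640, 480))  # [10]
```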

      Import (and Conversion) using Command-Line Programs

      • Convert FLIR JPG (from the Plugins->Macros Menu only)
        • select a candidate JPG or folder of JPGs, and a call to the command line tool, exiftool, is performed to extract the raw-binary 16 bit pixel data and save it to a gray scale tif or png, placed into a ‘converted’ subfolder.
        • subsequently the user can import these 16-bit grayscale images and apply custom transformations or custom Raw2Temp conversions.
        • some images may be converted in reverse byte order due to FLIR conventions. These can be fixed with the Byte Swapper plugin after import.
      • Import FLIR JPG
        • select a candidate JPG, and a call to the command line tool, exiftool, is performed to extract the raw-binary 16 bit pixel data, temporarily save this to a gray scale tif or png, import that file, and call the Raw2Temp function using the calibration constants derived from the FLIR JPG file.
      • Import FLIR SEQ
        • Import: select a candidate SEQ file, and a call to the command line tools exiftool, perl, and ffmpeg is performed to extract each video frame (.fff) file, extract the subsequent raw-binary 16 bit pixel data, save these as a series of gray scale files, and collate these into an .avi file or a new folder of png or tiff files. The resulting .avi file is imported to ImageJ using the Import-Movies (FFMPEG) import tool.
        • jpegls as the output video codec is advised for its high compression, lossless quality, and compatibility between different OS versions of FFMPEG.
        • this function may also work on FCF file types but has not been thoroughly tested
        • Convert: this function may also be used to convert the video into a folder of png or tiff files by selecting png or tiff as the output filetype, instead of avi. The file codec is ignored if you choose this approach. The folder will be automatically named according to the video file without extension. Thus, SampleVid.seq will be converted to files in the folder called SampleVid.
      • Import FLIR CSQ
        • Import: select a candidate CSQ file, and a call to the command line tools exiftool, perl, and ffmpeg is performed to extract each video frame (.fff) file, extract the subsequent raw-binary 16 bit pixel data, save these as a series of gray scale files, and collate these into an .avi file or a new folder of png or tiff files. The resulting .avi file is imported to ImageJ using the Import-Movies (FFMPEG) import tool.
          • jpegls as the output video codec is advised for its high compression, lossless quality, and compatibility between different OS versions of FFMPEG.
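As a small illustration of the Convert naming rule mentioned above (the output folder is the video filename with its extension dropped), in Python terms:

```python
from pathlib import Path

def converted_folder_name(video_file):
    # Folder that the converted png/tiff frames land in:
    # the video's filename without its extension.
    return Path(video_file).stem

print(converted_folder_name("SampleVid.seq"))  # SampleVid
```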

          Lookup tables and adjusting colour ranges

          • LUT (Thermal Palette Look Up Table) menu
            • for rapidly accessing different pseudocolour palettes
            • Grays, Ironbow, and Rainbow are more commonly used in thermal imaging
            • ImageJ’s built in LUTs can always be accessed from the Image-Lookup Tables Menu
            • select the next LUT in the list of all ImageJ LUTs, including the ones in the Thermal LUT list
            • select the previous LUT in the list all ImageJ LUTs, including the ones in the Thermal LUT list
            • invert the colour scale of the LUT
            • this can be toggled
            • setting the min and max values of the pseudocolour scale
            • set min equal to the lowest temperature desired on the lookup table scale
            • set max equal to the highest temperature desired on the lookup table scale
            • short-cut to ImageJ’s built-in Analyze->Tools->Calibration Bar
            • use this after temperature conversion of image
            • the tool attempts to choose an appropriately sized calibration bar by auto-adjusting the zoom factor
            • to save this permanently on an image you need to duplicate and/or convert your image to RGB format (Image–>Type->RGB Color), then flatten the overlay (Image–>Overlay–>Flatten), then save as a tiff or png. Note: the resolution of the text on the calibration bar depends on your image size and may be unsatisfactory with small images. If I find a fix for this, I will implement it.
            • Image Byte swap
              • short-cut call to the Byte Swapper plugin.
              • since FLIR files are sometimes saved using little endian order (tiff) and big endian order (png), a short-cut to a pixel byte swap is a fast way to repair imported files that have their byte order mixed up
              • FLIR Dates
                • user selects a candidate FLIR file (jpg, seq, csq) to have the Date/Time Original returned. Use this to quickly scan a file for capture times.
              • FLIR Calibration Values
                • select a candidate FLIR file (jpg, seq, csq) to display the calibration constants and built-in object parameters stored at image capture. Typically, the user would then use the Planck constants and Object Parameters in the Raw2Temp macro.
                • use this function on the original FLIR file if you have a 16-bit grayscale image of the raw data in a separate file and need to convert to temperature under specified conditions.
                • the calibration constants and object parameters are stored in memory for subsequent use of the Raw2Temp function, and should be remembered the next time you re-boot ImageJ, so if you are only working with one thermal camera’s files, you should not have to re-type the calibration constants for future uses.
                • Raw2Temp
                  • converts a 16-bit grayscale thermal image (or image stack) into estimated temperature using standard equations used in infrared thermography.
                  • user must provide the camera calibration constants, atmospheric attenuation constants, and object parameters that can be obtained using the FLIR Calibration Values macro.
                  • various custom versions of Raw2Temp are included for different cameras the author has used, since the calibration constants do not change from image to image, and only when the camera is sent back to manufacturer for re-calibration. Edit these macros once calibration constants are known for other cameras.
                  • a Fast and Slow calculation have now been implemented (v 1.4.1).
                  • The Slow calculation is slow because it converts the file to a 32-bit file and then converts each pixel to its calculated temperature. This can take time on video files and may be too much for large files or computers with little RAM.
                  • The Fast calculation implements a built-in ImageJ function that fits a 4th order polynomial through the relationship between temperature and the raw 16 bit data, providing a pseudo-converted file with both raw data and converted data showing up in the ImageJ status bar. This may or may not be an accurate depiction of the response, although I provide the user with cautionary advice on the numbers returned. If you restrict the range of temperatures used to fit the polynomial to ones reasonable for most biological applications, the error seems to be quite low, since the polynomial accurately fits the data; it is mainly at the extremely low and high ends of the camera’s temperature range that the fit is poor. For single image analysis, I advise you to use the Slow (accurate) conversion, and only consider the Fast conversion for large videos where the trade-off between CPU time and accuracy is more crucial.
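The Slow conversion applies the standard thermography equation pixel by pixel. Below is a simplified sketch, assuming emissivity = 1 and ignoring the atmospheric and window corrections that a full conversion also applies; the Planck constants are illustrative values in the style of exiftool's output, not from any specific camera:

```python
import math

def raw2temp(raw, PlanckR1, PlanckR2, PlanckB, PlanckF, PlanckO):
    # Invert the camera's Planck calibration curve:
    # raw sensor counts -> radiance -> temperature in degrees Celsius.
    t_kelvin = PlanckB / math.log(PlanckR1 / (PlanckR2 * (raw + PlanckO)) + PlanckF)
    return t_kelvin - 273.15

# Illustrative constants (values of this general magnitude appear in
# FLIR metadata; use your own camera's constants in practice):
t = raw2temp(16384, PlanckR1=21106.77, PlanckR2=0.012545258,
             PlanckB=1501.0, PlanckF=1.0, PlanckO=-7340)
```

The Fast mode then replaces this per-pixel computation with a 4th order polynomial fitted to the same curve over a restricted temperature range.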

                  ROI (Region of Interest) Tools

                  • ROI 1 to ROI 6 (from the Plugins->Macros menu)
                    • macros coded to short-cut keys, such as: 1,2,3,4,5,6 by adding [#] to the name of the macro in the ThermImageJ.ijm file
                    • some extra ROI short-cuts (i.e. d, l) might exist that I have in place for my own analyses - you can ignore these line ROIs
                    • extracts mean, min, max, sd, and area of the given ROI and saves to results window as well as to a ROI_Results.csv file to user’s desktop
                    • location of the ROI_Results.csv file can be changed by the user by editing the variable desktopdir at the top of the ThermImageJ.ijm file
                    • sample results file:
                    • edit the ThermImageJ.ijm to change the nature of the results to extract
                    • additional ROIs can be added to the toolset file
                    • designed to work with single images or image stacks
                    • slice label and number are recorded to the results table as:

                    • Extract ROI Pixel Values (short-cut: p)
                      • extracts the ROI pixel values to a results window table with X,Y,Value coordinates
                      • useful if you want to replot only your ROI data in another software environment
                      • useful if you need to perform different analyses on the data
                      • based on a macro from
                      • adds the result of the ROI parameter to the image as an overlay.
                      • will work on stacks or single images.
                      • ROI on Entire Stack
                        • performs an ROI analysis across the entire stack.
                        • min, max, mean, median, mode, skewness, kurtosis for every slice are exported to the results window and to file to desktop
                        • select which summary statistic to use in a discrete fourier analysis that extracts dominant frequency components.
                        • I have tested this on oscillatory data (metronomes set to move at fixed rates) and the fourier extracted frequencies appear to work correctly
                        • This function works on stacks, first by subtracting the difference in pixel values between frames, creating an absolute value difference stack n-1 frames in length.
                        • Then all pixels from each frame are examined for the mean and standard deviation per frame, stored to the results window, after which a cumulative value is calculated.
                        • This cumulative absolute difference value is then detrended and zeroed to remove mean value offset prior to a discrete fourier analysis to return frequency components.
                        • The user should provide time interval in seconds for the image stack.
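The steps in the bullets above can be sketched as follows (an assumed reading of the description, not the macro's actual code); the hypothetical `dominant_frequency` helper takes a (frames, height, width) stack and the frame interval in seconds:

```python
import numpy as np

def dominant_frequency(stack, dt):
    # Absolute frame-to-frame difference, per-frame mean, cumulative sum,
    # linear detrend, then a discrete Fourier transform; return the
    # frequency of the largest non-DC spectral component.
    stack = np.asarray(stack, dtype=float)
    diff = np.abs(np.diff(stack, axis=0))      # n-1 difference frames
    signal = diff.mean(axis=(1, 2)).cumsum()   # cumulative activity
    x = np.arange(signal.size)
    signal = signal - np.polyval(np.polyfit(x, signal, 1), x)  # detrend
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=dt)
    return freqs[1:][spectrum[1:].argmax()]    # skip the DC bin

# Example: intensity flips every 8 frames, so activity spikes arrive
# once every 8 s at dt = 1 s, i.e. a dominant frequency near 0.125 Hz.
stack = np.array([np.full((4, 4), (t // 8) % 2) for t in range(64)])
f = dominant_frequency(stack, dt=1.0)
```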

                        Converted JPG to raw 16-bit PNG or TIFF Workflow

                        • Determine your FLIR camera’s calibration constants (i.e. use the Calibration Values Tool)
                        • Convert Image to a 16-bit Grayscale File (i.e. Convert FLIR JPG)
                        • Import converted file to ImageJ using normal ImageJ file recognition. File->Open or File->Import Image Sequence can work on PNG and TIFF files.
                        • You may prefer to work with TIFF files, but the filetype created by these macros depends on how the raw data were stored by FLIR (PNG or TIFF). If you end up with PNG files, you might use ImageJ’s batch conversion tool to convert them before importing.
                        • Run the FLIR Calibration values macro on the original FLIR file in order to extract the calibration constants into memory
                        • Run Raw2Temp or one of the custom Raw2Temp macros for your particular camera
                        • Choose your palette (LUT in ImageJ)
                        • Use ImageJ ROI tools and Measurement tools
                        • Use the Import JPG tool which will scan the file for calibration constants, extract the raw thermal image, convert this to a PNG or TIFF file, and automatically open it.
                        • Inspect the opened image, calibration constants, and object parameters to ensure that these values are appropriate to your application.
                        • Choose your palette (LUT in ImageJ)
                        • Use ImageJ ROI tools and Measurement tools
                        • Use the Import SEQ or Import CSQ functions that scan the file to determine calibration constants before import
                        • Select the video import option and jpegls as the codec (i.e. the defaults). This will keep file size as small as possible and preserves compatibility with the ImageJ FFMPEG implementation.
                        • The Import SEQ and Import CSQ macros will automatically attempt to calculate temperature
                        • Once the file is converted and imported, double check that the calibration constants and object parameters are appropriate and select ok. If you escape at this stage, you should still have a 16-bit grayscale image stack, and could run the Raw2Temp function later
                        • First set the parameters you are interested in extracting in the Analyze->Set Measurements menu.
                        • Typical values are min, max, mean, modal, median, standard deviation, but ImageJ offers so many other values.
                        • In ImageJ terminology, “Intensity” or “Gray Value” corresponds to the number stored in each pixel. This might be the 16-bit raw value or it might be the 32-bit decimal converted temperature, depending on when analysis is performed.
                        • Take advantage of all the ImageJ ROI tools, or Tools->ROI Manager to draw regions of interest over sites of interest.
                        • Or, use the ROI 1-6 macros included as described earlier in the document.

                        Video Guide: Installation

                        See this screen capture introducing the basic installation steps.

                        Video Guide: Demonstration of Functions

                        See this screen capture demonstrating basic functions here:

                        Performance, Speed, File Size Limits, and Caveats

                        • The maximum number of video frames (i.e. stacks) will be limited by the CPU and RAM, but success with videos and image stacks of up to

                        The radiometric file types at present supported are mainly those from FLIR, however certain file types that can be imported into ImageJ could be used in the future, depending on information from users. Deciphering the radiometric data storage approaches takes time and requires sample files.

                        For a discussion about the Babylonian nature of thermal image file types and strategies employed by thermal camera manufacturers see

                        The following open source programs were crucial to the development of ThermImageJ.

                        ImageJ Macro Development occurred in association with:

                        Command Line Development occurred in association with:

                        Raw2Temp development occurred in association with:

                        Suggestions for improvements and additions, as well as bugs or issues can be filed here:

                        Please include a sample image to help with solving issues

                        Please star or follow this GitHub site to keep up to date on new releases as I fix errors following further testing.

                        ThermImageJ will still remain a work in progress as I add features that are useful to myself, but might not be readily apparent to other users. Occasional odd short-cuts that are present are likely the result of a project I am currently working on.

                        Note: I have no affiliation with thermal image companies nor do I receive any funding or free equipment despite the plethora of customers I have sent to them. This project emerged as a result of the frustration of needing to use Windows only software that has limited journaling and customisation. I should acknowledge that in July 2019, FLIR released a more affordable cross-platform analysis software that some users may prefer to invest in rather than this open source solution. It would be unfair of me to not recommend that you try their software first, since they are the experts.

                        Use Science Software ImageJ To Transform Your Photos and Videos

                        Bob Goldstein is a cell biologist at the University of North Carolina at Chapel Hill and an occasional contributor to Make magazine.


                        Editor's note: this is a 2018 update to an article Bob wrote for Make magazine, volume 27, July 2011.

                        ImageJ is a freely-available program for processing images and videos. It was made for scientists working with images from microscopes, but it’s available to anyone. And anyone is free to add new functions to the program. As a result, it improves constantly through contributions from the more programming-savvy among its users. In this sense, ImageJ is a great resource for images and videos much as the open source browser Firefox is for web browsing, or indeed as Wikipedia is for information: each will work on just about any computer, and anyone can make them better. But ImageJ is not yet well known outside of science.

                        As a scientist and a fan of creative tinkering, I thought it would be fun to see what might result if Make readers were introduced to some of the tricks that ImageJ can perform. If you have a few images or a short video made on a digital camera or a webcam, you can expect to transform them as in the examples that follow within about 20 minutes.

                        Where ImageJ came from

                        ImageJ was written for biomedical researchers by an employee of the US National Institutes of Health, programmer Wayne Rasband. It’s Java-based, so it works on PCs, Macs, and Linux computers. Rasband designed ImageJ with an open architecture: anyone could write plugins to add new tricks to the program.

                        ImageJ was first released way back in 1997, in the ancient, pre-Google era of computers. Rather than losing popularity over the years, its open architecture is making it an increasingly valuable tool to science. I was curious to know if Rasband foresaw the potential of an open architecture way back in 1997, so I emailed him. “I always liked seeing other people use the software I wrote”, wrote Rasband, “and I have always made the source code freely available. I discovered if I created software that was easy to extend, and I gave away the source code, then I would get code contributions back from the user community.”

                        What it can do

                        ImageJ is useful for processing images using filters similar to those found on programs like Photoshop. With hundreds of plugins available, there are hundreds of tricks to try. My own favorite tricks are the ones that can transform a video you’ve made by displaying motion from the video on a single image. I also like using ImageJ to perform math on images — combining two images by simply adding pixel intensities, for example.

                        Get started

                        You’ll want to start by downloading the ImageJ application and adding a couple of useful plug-ins:

                        1. Download ImageJ
                          Start by downloading ImageJ from the Fiji site. Fiji is just a version of ImageJ with a set of plugins already included and with an automatic update function so that your copy of ImageJ will keep up to date with useful changes. If you are using a Mac, put Fiji in your Applications folder.
                        2. Add some extra plug-ins
                          There are loads of plugins that you can add, with bewildering names like Point Picker, Spectrum Extractor, and Lipschitz Filter. For now, I recommend adding just one useful set of plugins, some of which you’ll use in the examples on the next few pages. This set is called “Cookbook” and can be installed easily using these instructions. The Cookbook plugins will appear under a new pulldown (at the top of your screen) called “Cookbook”.

                        Follow along and try the examples below with your own images

                        On the next few pages, I show some images that I made using ImageJ, along with a short description of each image so that you can get an idea of how each image was made.

                        In italics below each example, you can find a detailed, step-by-step “what to click” protocol for each of these examples, so that you can try doing similar things with your own photos and videos. You may want to try the examples at first with small images and very short videos, and perhaps using grayscale images and videos (or converting them to grayscale or 8-bit RGB when prompted), since the larger your files, the longer each step will take.

                        1. Image calculator

                        The Image Calculator tool can combine images in various ways using simple math, for example by adding, subtracting, or averaging the colors of each pixel. To illustrate, in the images shown above, the pixel in the lower left hand corner of the snowflake image has a color that’s coded as 12,6,10 in RGB, meaning that it has Red set at 12, Green set at 6, and Blue set at 10. These numbers are out of 255, with 0 being dark, and 255 bright. So a pixel color of 12,6,10 — all low numbers out of 255 — means that it’s a pretty dark pixel. The pixel at the corresponding position in the Lincoln photo is colored 70,54,42. The result of averaging these two? A pixel with the color 41,30,26: exactly the average between the two sets of numbers. The Image Calculator treats each pixel this way.
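The per-pixel arithmetic described above is easy to verify yourself. Here is a small numpy sketch (not ImageJ code) using the two example pixel values from the text:

```python
import numpy as np

# The two example pixels: snowflake corner and the matching Lincoln pixel.
snowflake = np.array([12, 6, 10], dtype=np.uint8)
lincoln = np.array([70, 54, 42], dtype=np.uint8)

# AVERAGE: per-channel mean. Compute in a wider integer type first,
# since adding two uint8 values can overflow 255.
average = ((snowflake.astype(np.uint16) + lincoln) // 2).astype(np.uint8)
print(average)  # [41 30 26]

# DIFFERENCE: per-channel absolute difference, again in a wider type.
difference = np.abs(snowflake.astype(np.int16) - lincoln).astype(np.uint8)
print(difference)  # [58 48 32]
```

Applied to whole images (arrays of shape height × width × 3) instead of single pixels, the same two lines reproduce what Image Calculator does across every pixel at once.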

                        This can be a powerful tool for combining images in predictable ways. What would happen if you displayed the difference between two nearly-identical photos?

                        You’ll see when trying the Image Calculator that adding, subtracting, and averaging are just a few among several ways to combine images.

                        Click FILE / OPEN and select an image on your computer, and then repeat to open a second image. Then click PROCESS / IMAGE CALCULATOR, select each of your image names, and then select an operation like ADD, AVERAGE or DIFFERENCE to combine the images.

                        2. Projecting a stack

                        Here, I’ve taken a short video during a lightning storm. This lightning flash took about a second to cross the sky. On the video, distinct parts of the flash could be seen in separate frames of the video (the 10 frames at top in the image below). To see what the whole flash looked like, I added the light in frame 1 to the light in frame 2, frame 3 and so on. To do this, I opened the video in ImageJ and used a tool called Z Project to make a single image.

                        The camera was left perfectly still during the brief video, sitting on a chair instead of in my moving hand. This turned out to be important: the images aligned well in the final projected image, but they would not have if the camera was moving.

                        You’ll see a number of ways to open files of different formats under FILE / IMPORT, but there’s an easier way: most files can be opened by just dragging and dropping them onto the row of tool icons. Before you open a video file, though, you may want to use a video editing program to trim your video to a short length. ImageJ can handle a video that has a few hundred images, but cramming in longer videos can slow or crash the program. As you open your video, you’ll be offered an option to convert it to 8-bit grayscale or RGB, and to use “virtual stack” rather than loading your whole video into memory. Doing these things will make steps that follow go more quickly.

                        iPhones unfortunately save movies in a format that ImageJ can’t use, so you’ll need to convert those to sequences of individual photos first using a tool like the Video 2 Photo iPhone app, or Quicktime Pro. Then you can open your folder of photos by clicking FILE / IMPORT / IMAGE SEQUENCE.

                        Once your video is opened in ImageJ, click on it to select it. You can scroll through it using the < and > keys. Then click IMAGE / STACKS / Z PROJECT and choose MAX INTENSITY to sum the brightest pixels from each frame onto a single image.

                        The montage at the top of this example was made with an ImageJ tool too. Click on your video again and then click IMAGE / STACKS / MAKE MONTAGE to make a montage.
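Conceptually, Z Project with MAX INTENSITY just keeps the brightest value ever seen at each pixel position across the stack. A hedged numpy sketch with a made-up random stack:

```python
import numpy as np

# A hypothetical stack of 10 grayscale frames (e.g. the lightning video),
# shaped (frames, height, width).
rng = np.random.default_rng(0)
stack = rng.integers(0, 256, size=(10, 48, 64), dtype=np.uint8)

# MAX INTENSITY projection: per-pixel maximum over the frame axis.
projection = stack.max(axis=0)
print(projection.shape)  # (48, 64)
```

This is also why a still camera matters: the maximum is taken at fixed pixel positions, so any camera motion smears the projected flash.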

                        3. Projecting a stack with a dark subject

                        This image was made in a similar way to the last one, except here the subject was darker than the background. The video used as a source here was taken from a camera left still, pointing at the sky as birds flew over. Then a short 2-second segment of the video, about 30 frames long, was projected in ImageJ.

                        Open your video by either dragging a video onto the ImageJ/Fiji toolbar or by opening an image sequence using FILE / IMPORT / IMAGE SEQUENCE. Then click on your video window to select it, click IMAGE / STACKS / Z PROJECT and choose MIN INTENSITY (instead of MAX this time) to add up the darkest pixels from each frame.

                        4. Highlighting objects that move

                        We made a time lapse video of Mexican jumping beans moving, and then tried to highlight the moving beans’ paths. A few ways of displaying the paths are shown in the figure below. The upper right image shows the same trick we used in the preceding example. In the lower left is the same trick but choosing “STANDARD DEVIATION” in place of “MIN INTENSITY”. This trick results in moving objects that pause for a long time appearing the brightest. Combining the resulting images using Image Calculator (as in example 1 above) can also give interesting effects.

                        Below, I describe how to simply subtract from each frame everything that was also on the previous frame, so that pixels that don’t change over time appear black, and those that do change over time appear in lighter shades (the lower right image).
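The frame-subtraction idea can be sketched in numpy as follows. This illustrates the concept only, not the actual DELTA F UP plugin, whose exact behavior (here assumed to clip negative changes to zero) I am guessing at:

```python
import numpy as np

# Hypothetical grayscale stack (frames, height, width).
stack = np.zeros((3, 4, 4), dtype=np.uint8)
stack[1, 1, 1] = 200   # something appears in frame 1...
stack[2, 1, 1] = 200   # ...and stays put in frame 2

# Subtract each frame's predecessor, clipping negatives to zero, so only
# pixels that got brighter since the last frame remain.
delta = np.clip(stack[1:].astype(np.int16) - stack[:-1], 0, 255).astype(np.uint8)

print(delta[0, 1, 1])  # 200: the object just appeared here
print(delta[1, 1, 1])  # 0: nothing changed between frames 1 and 2
```

Pixels that never change stay black in every difference frame, which is exactly why the stationary background vanishes in the lower-right image.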

                        Close any windows you have open in Fiji. Then open your video by either dragging a video onto the ImageJ/Fiji toolbar or by opening an image sequence using FILE / IMPORT / IMAGE SEQUENCE. When the option pops up to “convert to 8-bit grayscale”, click the box to accept this. I recommend not choosing the virtual stack option this time. Then click on the video to select it, and make a second video showing only the moving objects by clicking COOKBOOK / T-FUNCTIONS / DELTA F UP (“Cookbook” is the set of plugins that you installed into Fiji by following the instructions in the “Get started” section above).

                        5. Color coding time

                        My kids and I made a time-lapse film of the stars passing over our yard from dusk to dawn. (To make the film, we used a Canon point and shoot camera hacked with open source CHDK software to do long-exposure time-lapse recording. The NightCap iPhone app is also okay at doing this using a smartphone). Then we used ImageJ to make a black and white film of only the moving objects. Then the film was time-coded with color. Lastly, all of the colored frames were projected on top of each other onto a single image. Color equals time in the final image: purple is just after sunset, and orange/yellow is later, just before sunrise. For example, the clouds appeared just before sunrise, so they appear orange. The stars were going by all night long, so they’re in multiple colors.

                        This example used multiple plugins, but it really took only a few clicks in ImageJ. Open your video by either dragging a video onto the ImageJ/Fiji toolbar or by opening an image sequence using FILE / IMPORT / IMAGE SEQUENCE. When the option pops up to “convert to 8-bit grayscale”, click the box to accept this. Then click on the video to select it, and make a second video showing only the moving objects by clicking COOKBOOK / T-FUNCTIONS / DELTA F UP. Then, to color code time, click COOKBOOK / Z-FUNCTIONS / Z CODE STACK. You can select from a number of color schemes. Now click IMAGE / STACKS / Z PROJECT… and choose MAX INTENSITY to add up the brightest pixels from each frame.

                        6. Kymographs

                        My sons and I found a robin’s nest on our house, and we were fascinated to have a peek. So we set up a webcam and watched the nest. On one day, we made an all-day time-lapse recording. The mother sat on the eggs throughout the day, leaving periodically for food. We were curious if there were any patterns to the timing of her trips, so we made a kymograph — a graph displaying specific positions over time. Here, the positions chosen were under a thin line that crossed the eggs.

                        In the kymograph at bottom, time is marked along the top, running from morning at the left to evening at the right. When the blue of the eggs is visible in the vertical stripes, the mother bird was out of the nest. We had read online that robins never leave their nests for more than 10-15 minutes at a time, but it looks as if she took a long lunch around 11:40-noon. It got dark just before 7pm.

                        Open your video by either dragging a video onto the ImageJ/Fiji toolbar or by opening an image sequence using FILE / IMPORT / IMAGE SEQUENCE. Then select the straight line tool from the row of tool icons, and click and drag to draw a line over an area of interest in the video. Click IMAGE / STACKS / RESLICE to see what happened under that line over time. If you want to sample more than just a thin line, then instead of using the straight line tool, select the rectangle tool from the row of tool icons, and click and drag to draw a rectangle over an area of interest in the video. Click IMAGE / STACKS / RESLICE, and then click IMAGE / STACK / Z-PROJECT. Your kymograph will appear vertically, with time running from top to bottom.
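A kymograph built this way is just the set of pixels under the line, stacked one row per frame. A small numpy sketch with a synthetic moving spot (illustrative only, not the RESLICE implementation):

```python
import numpy as np

# Hypothetical time-lapse stack (frames, height, width): a bright spot
# sits on row 5 and drifts one pixel to the right each frame.
frames, h, w = 20, 10, 30
stack = np.zeros((frames, h, w), dtype=np.uint8)
for t in range(frames):
    stack[t, 5, t] = 255

# "Reslice" under a horizontal line at row 5: each video frame
# contributes one row of the kymograph (time runs top to bottom here).
kymograph = stack[:, 5, :]
print(kymograph.shape)  # (20, 30): time x position
```

Steady motion shows up as a diagonal streak in the kymograph; a stationary subject would draw a vertical line, and an absence (like the robin leaving her eggs) appears as a change in the stripe's color.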

                        7. Take your pixels out of ImageJ and treat them to your own math

                        You can extract the pixel values from your image and try transforming them yourself if you’re handy with a spreadsheet program like Microsoft Excel. It’s interesting to see how features like local contrast can be highlighted this way. To illustrate this, I’ve used a 50×50 pixel image of an acorn below (A), generating several versions of the original image using simple formulas in Excel. B is a grayscale version of A. In C, the image has been blurred by averaging each pixel’s value with that of its neighbors. In D, areas of high contrast are highlighted after comparing each pixel’s value with its neighbors. In E, just the areas of high contrast are shown. In F, the degree of contrast in each area is converted to continuous grayscale. And in G, the grayscales in F were converted to colors using a lookup table in ImageJ.

                        I recommend that you try this on a tiny image, about 50×50 pixels, because some steps below will work very slowly on larger images in Excel or other spreadsheet programs. Then convert your image to grayscale by clicking IMAGE / TYPE / 8-BIT. Then save the image using FILE / SAVE AS / TEXT IMAGE. This produces a text file that you can then open using a spreadsheet program like Microsoft Excel.

                        In the spreadsheet, you’ll see an array of numbers, each representing the pixel value of an individual pixel in a grayscale version of your image. I’ve used a 50×50 pixel image of an acorn (A) in the example below. I opened it in Excel as described above, tried out some things to transform that array of numbers, and then to see what image the new numbers would produce, I copied those cells back into a text editor (I used TextWrangler for Mac, but loads of others are available). I saved that file with a filename followed by .txt, and opened that file in ImageJ by clicking FILE / IMPORT / TEXT IMAGE. A grayscale image of the acorn resulted (B). Then I went back to the Excel file and tried playing around with the set of numbers. An Excel file containing each of the transformations in C-F is available here. If you’d like to color a grayscale image, you can open it in ImageJ, and click IMAGE / LOOKUP TABLES and pick a color scheme.
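If you'd rather skip the spreadsheet, the same neighborhood formulas can be written directly in Python. A sketch of the blur (C) and local-contrast (D) transforms on a made-up miniature image:

```python
import numpy as np

# A tiny grayscale image (values 0-255), standing in for the acorn.
img = np.array([
    [10, 10, 10, 10],
    [10, 200, 200, 10],
    [10, 200, 200, 10],
    [10, 10, 10, 10],
], dtype=float)

# Blur (panel C): replace each interior pixel with the mean of its
# 3x3 neighborhood -- the same formula you'd put in a spreadsheet cell.
blurred = img.copy()
for y in range(1, img.shape[0] - 1):
    for x in range(1, img.shape[1] - 1):
        blurred[y, x] = img[y-1:y+2, x-1:x+2].mean()

# Local contrast (panel D): how far each pixel sits from the local mean.
contrast = np.abs(img - blurred)
```

Thresholding `contrast` would give panel E, and rescaling it to 0-255 would give the continuous grayscale of panel F.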

                        Figure 7: Image of an acorn transformed in several different ways

                        Experiment and have some fun

                        Once you’ve downloaded the program, it can be interesting to open a photo or import a video that you’ve taken and start clicking away to see what various buttons do. For those who prefer a more systematic start, there are guides available here or here.

                        In case you ever want to try more plugins, here’s a link to a big set of plugins. One plugin that I especially like is “Running Z projector”.

                        To add any of these plugins, first download it to your computer. If you’re using a Mac and it tells you the files can’t be opened because they’re from an unknown developer, you can bypass this by opening your Mac’s System Preferences, then open Security and Privacy, and you’ll see an “Open Anyway” button. Then open Fiji, click PLUGINS, then INSTALL… and select the file you downloaded.

                        Contribute to science yourself!

                        Makers and scientists both form creative communities, and communities that could probably learn a thing or two from each other. If you can program, and you see an interesting way to display images that no existing plugins can yet do, then why not write a new plug-in? Hundreds of plugins exist already, but great, new plugins appear every year, and the best plugins probably have yet to be written. Here’s an overview on creating new plugins.

                        Who’ll make use of your plugin, and what scientific discoveries might it help propel? Biomedical research articles are increasingly found in full form online, so in the months to years after submitting a plug-in, search online for the name of your plugin to find out how it’s contributing to science.

                        Introduction to Digital Images

                        00:00:11.07 I'm Kurt Thorn from the Nikon Imaging Center at
                        00:00:13.21 UCSF, and today I'm going to talk about digital image
                        00:00:17.00 analysis. And this is part one of a two part set of lectures.
                        00:00:20.28 And in this part I'm going to talk primarily about what
                        00:00:24.05 digital images are, how you display them on your computer,
                        00:00:27.15 and how you can save them so you can work with them later.
                        00:00:31.11 So what a digital image is, essentially, is a set of measurements
                        00:00:36.22 of light intensity. So we have a camera, we've talked about cameras
                        00:00:40.07 in other lectures, or a confocal microscope or some device
                        00:00:43.24 that records light intensity at a set of points in your field
                        00:00:46.25 of view in your microscope. So you get an image like this one
                        00:00:50.12 here, where you've got a bunch of bright objects in a field.
                        00:00:55.10 And if you zoom in one of these objects, you can see the individual
                        00:00:58.03 pixels that make it up here. Some of them are bright,
                        00:01:00.28 some of them are dark. And what each of those really represents
                        00:01:03.21 is a measurement of the light intensity that was recorded
                        00:01:06.16 at that point on the camera, or at that point in the image.
                        00:01:08.19 And so if you look at the numbers that make this up,
                        00:01:11.06 you can see here this array of numbers that corresponds to
                        00:01:15.24 pixel intensities in each one of these pixels. And you can see
                        00:01:19.13 that they range from zeroes here at the edges, where it's
                        00:01:22.12 totally black, to you know, 200-255 in the center, where it's
                        00:01:26.15 very bright. And so these images are just really an array of
                        00:01:32.20 numbers which represent the light intensity at each point
                        00:01:34.22 in the field of view. And computers store numbers in binary
                        00:01:40.13 format. And in binary, you can have a large number of different
                        00:01:46.14 representations, and depending on what resolution or what
                        00:01:49.16 dynamic range you want, you can start with a different number of bits.
                        00:01:53.03 So, a single bit here at the top can represent zero or one.
                        00:01:56.16 So this would be a binary image, it's either black or white.
                        00:01:59.28 If you do 2 bits, now you can have four colors like this bar here.
                        00:02:03.24 Now you've got black and white plus two shades of gray.
                        00:02:07.08 And you can keep adding bits to get more and more shades.
                        00:02:10.25 If you do 4 bits, you get 16 shades of gray. If you do 8 bits,
                        00:02:14.17 you get 256 shades of gray. And you know, you can do 12
                        00:02:19.04 or 16, which will get you 4000 or 65000 shades of gray.
                        00:02:23.03 Computers for historical reasons, tend to store numbers
                        00:02:28.17 as bytes, and a byte is 8 bits. So these final formats here on the
                        00:02:33.11 top, or these bit depths on the top, can be represented as a
                        00:02:36.12 single byte. So one byte will store one pixel's worth of
                        00:02:39.29 information. Whereas below this line here, when you go
                        00:02:44.04 above 256 and you have more than 8 bits, now you need 2
                        00:02:46.29 bytes. So you would store these generally as 2 bytes or
                        00:02:50.06 16 bits. And so the higher the intensity resolution
                        00:02:56.06 you want, the more dynamic range you want, the more
                        00:02:58.10 space it takes to store your data. But also the more fine
                        00:03:02.24 resolution you get of different gray scales of the image.
                        00:03:05.25 Here I'm showing an image at a number of different bit depths,
                        00:03:09.10 and that the top left corner here is the original image, which is
                        00:03:12.20 an 8 bit image. It's got 256 gray scales. And then these other panels,
                        00:03:19.00 I've reduced the number of gray levels and reduced the bit depth to
                        00:03:22.19 smaller numbers. So down here, you can see what happens
                        00:03:25.16 if we make this into a 6 bit image. And we have a fewer number of
                        00:03:30.08 gray levels in the image, but it's actually very hard to see the
                        00:03:32.24 difference between these two. And that's partly because your
                        00:03:35.06 computer monitor can only show 8 bits of gray scale
                        00:03:39.02 and partly because your eye is not sensitive to very many
                        00:03:43.04 shades of gray. Your eye at best can resolve 100 or maybe
                        00:03:46.29 150 gray levels. But then when we start reducing the bit depth
                        00:03:51.09 further, if we go to a 4 bit image here, you can now start
                        00:03:53.24 to see this sort of posterization, where you get, especially in the
                        00:03:57.12 smooth gradients, large discontinuities in the intensities
                        00:04:00.20 because we don't have enough gray levels to make a smooth
                        00:04:02.19 shading. And in 2 bits here, we see not only 4 grayscale,
                        00:04:05.25 but we get a very stylized image that doesn't bear a lot of
                        00:04:09.10 resemblance anymore to the original intensities here.
                        00:04:11.03 So the more bits you have, the more gray levels you can
                        00:04:14.17 record. And modern cameras can quite easily record
                        00:04:17.20 more gray levels than you can resolve with your eye.
                        00:04:20.02 And as I mentioned, your monitor, your computer monitors
                        00:04:24.07 are 8 bit displays. So they can only display 8 bits of gray
                        00:04:29.17 scale information, so 0 to 255. They're color, so they can display
                        00:04:33.28 you know, 0 to 255 in each of red, green or blue. But if
                        00:04:38.22 you're just looking in a single color, you only get 8 bit
                        00:04:40.29 resolution. So, before when I was showing these, you know,
                        00:04:44.23 16 bit grayscale traces on the bottom, we were scaling them such
                        00:04:49.07 that you know, the blackest pixel, the 0 pixel, was black.
                        00:04:53.04 And the brightest pixel, the 65000 pixel, was white.
                        00:04:56.20 But, what if instead we just scaled them so that the
                        00:05:00.18 actual intensity values are mapped directly onto the monitors?
                        00:05:03.23 So that 0 would be black and 255 would be white. Then you get
                        00:05:08.13 something that looks like this. You can see the small bit
                        00:05:11.03 depth images are only using the darkest gray levels on the monitor,
                        00:05:14.24 so they're very hard to resolve. The 8 bit one here is
                        00:05:18.03 obviously matched to the monitor, so we see the full
                        00:05:20.17 gamut going from black to white. And when we get to these larger
                        00:05:23.24 bit depths, the 4000 or 65000 gray levels, you now see that
                        00:05:28.10 all the dynamic range is compressed into this little tiny
                        00:05:30.17 bit at the bottom, and everything above 255 now just shows this
                        00:05:34.01 white. So this should hopefully bring up this point that
                        00:05:38.25 if you're looking at images that have a different bit depth than your
                        00:05:41.26 monitor, you need to do some kind of matching of the gray levels of
                        00:05:45.21 the image to the gray levels of your monitor. Or you're going
                        00:05:49.06 to either compress everything to either the dark shades or
                        00:05:52.02 saturate a tremendous amount in white, unless you happen to be
                        00:05:55.05 working with an 8 bit image. So you always have to think about
                        00:06:00.00 how you're going to do this mapping. And generally,
                        00:06:02.14 this is called a lookup table in most image analysis packages.
                        00:06:06.04 And the idea here is, you have the digital numbers that make up
                        00:06:09.18 your image on the bottom axis here, and on the side axis
                        00:06:13.18 here, you have the display intensity of your monitor. So here
                        00:06:17.09 we can take an 8 bit image, and we can control how we
                        00:06:20.10 map the intensities of the image to the intensities on the
                        00:06:22.25 display. And here we're just doing the obvious thing of
                        00:06:25.12 mapping with a straight line that goes from 0 to 255.
                        00:06:29.21 If instead, we use a much steeper line that now goes from
                        00:06:34.13 0 to maybe say 50, so we're saturating everything above
                        00:06:37.21 50 in white, you can see now the image gets very
                        00:06:39.22 bright and washed out. So this is for an 8 bit image; you can
                        00:06:45.02 equally imagine how you would do this if this bottom scale,
                        00:06:48.03 instead of going from 0 to 255, went from 0 to 4000.
                        00:06:51.02 Or 0 to 65000. And for preparing images for presentation,
                        00:06:58.16 so say you're putting together a Powerpoint lecture like this one,
                        00:07:01.19 or preparing figures for a journal or publication elsewhere,
                        00:07:05.21 almost always you need 8 bit files there. Because computer
                        00:07:09.05 screens are 8 bit, and publishing workflows are designed around
                        00:07:12.09 8 bit files. And so if you have some image that has some larger
                        00:07:16.12 bit depth than 8 bit, if you have those 12 bit images that go from
                        00:07:19.01 0 to 4095, you need to think about how you're going to do this
                        00:07:23.01 mapping. And an important point here is that you lose information
                        00:07:26.01 in this process. Because we only have 256 values in our final
                        00:07:29.23 image, but we start with 4096 values in our initial image.
                        00:07:32.10 And so, for instance, with this mapping
                        00:07:35.12 where we're just mapping from the min to the max with a
                        00:07:38.19 straight line, all the values between 4080 and 4095
                        00:07:43.02 are going to end up mapped to 255 in the final image.
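The information loss in that 12-bit-to-8-bit conversion is easy to see in code. This is an illustrative sketch (the function name is mine, not from the lecture): dropping the four least significant bits means sixteen different 12-bit values collapse onto each 8-bit value.

```python
def twelve_to_eight(value):
    # Map a 12-bit value (0-4095) to 8 bits (0-255) by dropping the
    # 4 least significant bits; 16 input values share each output value.
    return value >> 4

# Every value from 4080 through 4095 collapses to 255 -- that
# distinction is gone and cannot be recovered from the 8-bit file.
print(twelve_to_eight(4080), twelve_to_eight(4095))  # 255 255
print(twelve_to_eight(0))                            # 0
```

Once two distinct input values map to the same output, no later processing step can tell them apart, which is exactly why the conversion belongs at the end of the pipeline.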
                        00:07:45.19 And so generally, this means you want to put off this
                        00:07:49.17 conversion as long as possible. You should do this as the last
                        00:07:51.24 step in your image analysis pipeline. If you do it too soon,
                        00:07:55.05 you lose this information, and that may come back to hurt you
                        00:07:58.23 in the end. When you're trying to bring out some detail
                        00:08:00.26 that was in the original image, but then got lost when you did
                        00:08:03.15 this conversion. So generally, you want to stay in the native bit
                        00:08:07.19 depth of your image, as long as you can, and then as the
                        00:08:10.06 final step, make an 8 bit image that you can send to the
                        00:08:12.18 publisher or put in your Powerpoint. So in this intensity
                        00:08:19.21 scaling here, in this look up table, you'll see a number of
                        00:08:23.07 terms that refer to how you can do the scaling. Right now
                        00:08:27.11 we're just talking about linear scaling, so you can define the
                        00:08:29.17 line as either the min and the max, that's how I've been talking about it.
                        00:08:33.09 But you'll often also see terms of contrast and brightness,
                        00:08:37.10 and so contrast refers to the slope of this line, so if you
                        00:08:40.09 make this line steeper, you'll get higher contrast. If you make
                        00:08:43.26 it shallower, you get lower contrast. And then brightness
                        00:08:47.26 is the offset of this line along the vertical axis here.
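To make the slope-and-offset picture concrete, here is a minimal sketch of a linear lookup table for one pixel (the names and parameterization are my own; real packages expose these controls in different ways):

```python
def apply_lut(pixel, contrast=1.0, brightness=0.0):
    # display = contrast * pixel + brightness, clipped to the monitor's
    # 8-bit range; contrast is the slope, brightness is the offset.
    value = contrast * pixel + brightness
    return int(min(255, max(0, value)))

print(apply_lut(100))                  # 100 (identity mapping)
print(apply_lut(100, contrast=5.1))    # 255 (steep slope saturates in white)
print(apply_lut(100, brightness=50))   # 150 (offset shifts everything up)
```

Setting a min and a max is the same operation in different clothes: contrast = 255 / (max - min) and brightness = -min * contrast.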
                        00:08:51.21 So you'll see most software packages will let you set this
                        00:08:57.06 either by setting the min or the max or by setting the contrast
                        00:08:59.06 and brightness. And so here again is just this image, and
                        00:09:04.21 showing the effects of this brightness and contrast adjustment. And
                        00:09:07.25 now in addition to the image, we're showing the mapping down here
                        00:09:11.03 with this line. And also in this grayscale here, the pixel
                        00:09:15.14 intensity histogram. And so what this is, is just going through this
                        00:09:19.02 whole image and looking at every pixel and counting up
                        00:09:21.18 how many pixels there are at a given gray level. And so you
                        00:09:25.10 can see in this image, there is a big peak down here in the dark
                        00:09:28.20 pixels, that corresponds to the ground, and then another big peak here of bright pixels that corresponds to the sky. And then
                        00:09:34.15 a smaller number of pixels of the stuff in between them.
                        00:09:38.22 And so you can see here, that in fact, this image doesn't
                        00:09:41.13 have anything that is completely at 255 or at 0.
                        00:09:44.02 That the ground is not totally black and the sky is not
                        00:09:47.15 totally white. And so, we can then adjust this scaling here
                        00:09:51.01 so that we're scaling just between the minimum value of the image
                        00:09:53.29 and the maximum value in the image. And you can see this makes
                        00:09:56.16 the ground totally black, and the sky in the brightest spots totally white.
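The histogram and the min-to-max rescaling just described can be sketched in a few lines (a toy pixel list stands in for the lecture's image):

```python
def histogram(pixels, levels=256):
    # Count how many pixels fall at each gray level.
    counts = [0] * levels
    for p in pixels:
        counts[p] += 1
    return counts

def autoscale(pixels):
    # Stretch [min, max] of the image onto the full display range [0, 255].
    lo, hi = min(pixels), max(pixels)
    return [round((p - lo) * 255 / (hi - lo)) for p in pixels]

pixels = [40, 45, 50, 180, 200]   # nothing at 0 or 255, like the photo
print(autoscale(pixels))          # darkest pixel -> 0, brightest -> 255
```

After autoscaling, the darkest pixel lands at 0 and the brightest at 255, which is what made the ground totally black and the sky totally white.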
                        00:09:59.10 And so this is called brightness/contrast adjustment, and
                        00:10:03.14 this is kind of a fundamental tool you'll need to adjust any time
                        00:10:07.19 you're looking at an image on a screen. And particularly if
                        00:10:10.17 you're looking at an image that has a bit depth that's
                        00:10:12.15 different from the screen you're using to display it on.
                        00:10:13.27 So, a corollary of this is that you want to be careful how your
                        00:10:20.11 software is scaling your image. So many software packages
                        00:10:23.16 will scale in different ways. And the two common defaults
                        00:10:27.12 are that it'll either scale to the full range, so here's a
                        00:10:29.29 16 bit image that goes from 0 to 65000. And the scaling
                        00:10:34.15 here is set that it's just mapping 0 to 0 on the monitor.
                        00:10:39.18 And 65000 to 255 on the monitor. And so you can see this image is
                        00:10:44.12 really dark and murky, because in fact, there aren't any pixels
                        00:10:46.20 here that are at the maximum intensity value of the 16 bit
                        00:10:49.07 format. There are no 65000 valued pixels. So an alternative
                        00:10:55.14 way to scale this is sort of how we did that image on the previous
                        00:10:58.16 screen, which is just to scale from the min to the max.
                        00:11:00.24 And so if we do that, now we scale between you know,
                        00:11:04.15 whatever that number is and 10000 or so, which represent the
                        00:11:07.20 darkest and brightest pixel in the image. And now you can see
                        00:11:11.15 you bring out this image much better and you can see this
                        00:11:14.08 is an image of a galaxy and some stars. And so it's very common
                        00:11:20.09 for software packages to do both of these things: either to
                        00:11:22.19 scale by default to the full range
                        00:11:25.26 of the data format, or to autoscale to the min and max in the particular
                        00:11:31.08 image you're looking at. And in particular, this autoscaling
                        00:11:34.18 is very nice because it makes it immediately easy to
                        00:11:37.21 see what's in your image. But if you load two different
                        00:11:40.19 images and autoscale them, they're not going to have
                        00:11:42.20 the same scaling. And so you can't directly compare
                        00:11:44.23 intensities between them by looking at them. So it's
                        00:11:48.16 important to be aware of what your software is doing.
                        00:11:50.25 Another option here is of course that we could instead of
                        00:11:55.00 scaling just to the minimum and maximum values in this
                        00:11:57.21 image, we could saturate some of the values in this image.
                        00:12:01.08 We could instead choose to scale from the minimum to
                        00:12:03.26 something below the maximum, and the result of that is
                        00:12:06.13 we set all these bright pixels here to 255 on the screen,
                        00:12:09.15 in full white. And
                        00:12:13.13 so we can't really resolve what's going on inside of those stars
                        00:12:16.01 now, but instead we can see very clearly this nice spiral
                        00:12:19.18 galaxy here, which was dimmer and not really well resolved
                        00:12:22.19 on our monitor before. Another option for scaling here is
                        00:12:28.29 to go beyond just simple linear scaling, and instead apply
                        00:12:32.01 some kind of non-linear scaling. And a very common one
                        00:12:34.27 you'll see is gamma correction. And this is basically a
                        00:12:39.20 power-law mapping of your data.
                        00:12:42.07 Gamma equals 1 corresponds to linear scaling.
                        00:12:45.25 So this straight line in the middle here is the gamma equals
                        00:12:48.16 1 case. And then gamma values less than 1, basically
                        00:12:53.16 saturate things at high intensities, and really stretch out the
                        00:12:56.21 intensity differences between low intensity objects. And so you can
                        00:13:00.05 see here, that the slope of this curve is very steep for low intensities
                        00:13:03.24 on the input. And so we're making a lot of gray scale
                        00:13:08.10 distinctions between low intensity objects, but then as we get
                        00:13:11.24 to higher intensity objects, we're flattening those out
                        00:13:14.27 and not using as many gray scales to represent the really bright
                        00:13:18.05 objects. Conversely, a gamma above 1 does the opposite.
                        00:13:23.11 So it will compress all the dim stuff here to a small number of
                        00:13:27.06 gray scales, and stretch out the gray scales among the brighter
                        00:13:30.03 objects in the sample. And these are less commonly used
                        00:13:34.20 but they're very handy if you have images that have a
                        00:13:38.16 large dynamic range, where there's important differences
                        00:13:41.04 concentrated at one end of the intensity spectrum or the other.
                        00:13:44.09 And so just showing that applied to this galaxy image.
                        00:13:47.02 Here is gamma equals 1, in gamma is above 1, you can see here
                        00:13:52.04 that it really makes it hard to see the dim stuff in the object.
                        00:13:54.27 But here, gamma below 1 helps you bring out those dim
                        00:13:58.25 background pixels that make up that spiral galaxy
                        00:14:00.21 that otherwise would go missing.
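Gamma correction, as described, is a power-law curve. A minimal sketch for a single 8-bit pixel (function name is mine):

```python
def gamma_correct(pixel, gamma, max_val=255):
    # Normalize to [0, 1], raise to the power gamma, rescale to 8 bits.
    return round((pixel / max_val) ** gamma * max_val)

print(gamma_correct(128, 1.0))   # 128: gamma = 1 is the linear case
print(gamma_correct(25, 0.5))    # 80: gamma < 1 stretches out dim pixels
print(gamma_correct(25, 2.0))    # 2: gamma > 1 compresses dim pixels
```

With gamma below 1, a dim pixel like 25 gets pushed up to 80, using many more gray levels for the faint spiral arms; with gamma above 1 the same pixel is crushed to 2.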
                        00:14:04.17 Okay, so with all these things we can do to our images,
                        00:14:07.10 what's acceptable? What's sort of the scientific standard here
                        00:14:11.12 for what you can do in a published paper?
                        00:14:15.05 And the Journal of Cell Biology has put a lot of work into
                        00:14:17.23 this, and they have some very nice guidelines which you can
                        00:14:20.04 find at that these websites here. But to summarize them briefly,
                        00:14:24.10 the general idea is that brightness and contrast adjustments are
                        00:14:27.12 okay. So long as you do them over the whole image, and you
                        00:14:30.26 don't obscure or eliminate background. So you don't
                        00:14:33.00 want to set your zero levels so high that you suppress
                        00:14:37.27 real background signal in your images. And you also don't want
                        00:14:42.12 to treat different objects in your figure differently, or
                        00:14:45.27 different objects in the same field of view differently.
                        00:14:48.11 They require that nonlinear adjustments, like gamma
                        00:14:51.19 corrections, be disclosed. And importantly, you can't cut and
                        00:14:55.18 paste regions within an image. So you can't make a composite of
                        00:14:58.16 multiple images without making it obvious, say with white bars between the
                        00:15:01.24 images, that it's a composite. They don't
                        00:15:03.15 want you to put things together that weren't actually together in real life.
                        00:15:08.10 And finally, and a particularly critical point is, to make any
                        00:15:12.10 real comparisons in your image, you have to treat your control
                        00:15:14.23 and experimental data identically. So you want to show them
                        00:15:17.13 using the same scaling and the same gamma corrections,
                        00:15:19.16 if you've done a gamma correction and so on.
                        00:15:21.13 Okay, so that's sort of the basics of how to display
                        00:15:27.02 grayscale images. And now I want to talk a little bit about using color.
                        00:15:29.29 So, so far we've completely restricted ourselves to using
                        00:15:34.16 black and white grayscale image displays. But there's no reason
                        00:15:39.01 you can't use color. And so here's an example of just a 2 bit
                        00:15:43.13 image. It's got four different values, 0, 1, 2, and 3.
                        00:15:47.08 Here are the pixel values, and here is what this thing looks like.
                        00:15:49.27 It's a little bright spot. And so we can display it in gray
                        00:15:54.29 scale the way that we've been doing all along right here.
                        00:15:56.25 But we can also map different gray values to different colors
                        00:16:01.16 on the display. And so shown here is a color map where we're using
                        00:16:06.09 black and shades of green to display this object. And then that
                        00:16:09.13 false colors it green. But we don't need to stop there, we can
                        00:16:12.14 use bizarro color scales like this one, which map 0 and 3,
                        00:16:16.13 the darkest and brightest thing, both to red. And you can
                        00:16:20.09 see what it looks like there. You can use, you know,
                        00:16:23.04 whatever color scale you want basically. And that sometimes
                        00:16:26.23 is a good way to bring out small differences in your image.
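That 2-bit false coloring is just a table lookup. A sketch with illustrative values (the particular green shades here are my own choices, not the lecture's):

```python
# Map each 2-bit gray value (0..3) to an RGB triple: black up to full green.
GREEN_LUT = {0: (0, 0, 0), 1: (0, 85, 0), 2: (0, 170, 0), 3: (0, 255, 0)}

def false_color(image, lut):
    # Replace every gray value with its color from the lookup table.
    return [[lut[p] for p in row] for row in image]

image = [[0, 1],
         [2, 3]]
print(false_color(image, GREEN_LUT))
```

Swapping in a different dictionary gives the "bizarro" scales too; nothing constrains the table to be monotonic in brightness.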
                        00:16:29.12 So here's an example looking at kind of a fairly uniform
                        00:16:33.04 grayscale image. This happens to be a brightness or a
                        00:16:38.03 flat field correction for a microscope, where we're just looking
                        00:16:40.29 at the uniform illumination on a microscope. And we've scaled
                        00:16:45.08 this to the min and the max in the image. So here's the
                        00:16:47.12 gray scales, they run from 7000ish to 8 or 9000 here.
                        00:16:52.05 And you can see a lot of structure in this, but that becomes a lot clearer
                        00:16:57.02 if we now false color this, rather than just in shades of
                        00:16:59.18 gray, and this color map, this heat map, goes from blue
                        00:17:02.00 for dark objects all the way up to dark red for bright objects.
                        00:17:05.17 And you can see when we do that, now we can make out
                        00:17:10.06 fine details in this image more easily. And it also makes
                        00:17:13.18 it easier to map the shades of color here to intensities
                        00:17:18.08 by looking them up on this color scale here, just because it's easier
                        00:17:21.08 to distinguish differences in color than between
                        00:17:24.17 two gray scales. We can also use this procedure to make
                        00:17:30.06 false colored images. So here's an example of taking three
                        00:17:33.12 images. These are three color stained tissue culture cells.
                        00:17:36.16 Stained for nuclei up here, mitochondria here, and actin
                        00:17:40.16 down here. And shown on the left here is just the grayscale
                        00:17:45.00 mappings, and on the right is applying the particular color
                        00:17:47.27 scales here from these bars in the center to that image.
                        00:17:50.16 So we've mapped the nuclei in cyan, the mitochondria in yellow,
                        00:17:53.29 and the actin in this sort of magenta color. And if we now
                        00:17:59.14 put all three of these images together and overlay them,
                        00:18:01.26 we get this very pretty funky colored image.
                        00:18:07.25 And so that brings me to color images. So most microscopy
                        00:18:13.24 and especially most fluorescence microscopy, is done with
                        00:18:15.24 grayscale images. Just because our cameras are monochrome,
                        00:18:20.25 and that's usually what we detect. But color images are
                        00:18:23.12 also obviously quite important. Particularly for making figures for
                        00:18:26.23 publication, as well as for non microscopy applications or
                        00:18:31.03 microscopy applications where you're looking at H&E
                        00:18:34.09 stained specimens or other pathology type specimens,
                        00:18:36.26 where you want a true colored image of your sample.
                        00:18:39.10 So a color image is nothing more than three gray scale images,
                        00:18:43.08 one image each for the red channel, the green channel,
                        00:18:47.24 and the blue channel. And these like any other images can have
                        00:18:51.16 bit depths, more or less standard is they're either 8 or 16
                        00:18:54.27 bits per channel. So if we look at just a color image here,
                        00:18:59.27 this can be decomposed to these three images of red,
                        00:19:03.12 green, and blue. And then each one of those is really just
                        00:19:06.28 a monochrome image here with the appropriate look up table
                        00:19:11.11 applied to it. So you know, red look up table for the red,
                        00:19:14.21 green lookup table for the green, and a blue lookup table for
                        00:19:16.21 the blue. And these can be stored in a number of different
                        00:19:22.05 ways. You can either store for each particular pixel,
                        00:19:25.24 either the red value or the green value or the blue value,
                        00:19:27.14 and then for the next pixel, the red value, the green value,
                        00:19:29.11 the blue value. Or you can really just store these three separate
                        00:19:32.17 images as three planes. A red plane, which has all the red
                        00:19:36.05 pixels, a green plane that has all the green pixels,
                        00:19:38.04 and a blue plane that has all the blue pixels. You don't
                        00:19:41.22 really need to worry about this most of the time, because your
                        00:19:44.14 software should take care of the loading and properly sorting
                        00:19:47.17 out how to display the color information in a color image.
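The two storage layouts mentioned, per-pixel (interleaved) versus per-plane (planar), can be illustrated with a two-pixel example (pure illustration; real file formats add headers, strides, and padding):

```python
# Interleaved layout: R, G, B for pixel 0, then R, G, B for pixel 1, ...
interleaved = [255, 0, 0,   0, 255, 0]   # two pixels: pure red, pure green

def to_planar(data):
    # Split interleaved RGB into three planes: all reds, all greens, all blues.
    return data[0::3], data[1::3], data[2::3]

r, g, b = to_planar(interleaved)
print(r, g, b)   # [255, 0] [0, 255] [0, 0]
```

Both layouts hold exactly the same numbers; only the ordering on disk differs, which is why your software can hide the distinction.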
                        00:19:51.02 In microscopy, it's common to have more complicated
                        00:19:56.26 image formats than just black or white or color.
                        00:19:59.11 You often have stacks or sequences of images, say if you've
                        00:20:03.13 taken a movie. Or maybe you've taken a through focus section here
                        00:20:07.04 and a different set of z-planes here in your sample.
                        00:20:08.29 Or other variables, more colors than just red, green, and
                        00:20:13.04 blue. You could do five or six wavelengths in an image.
                        00:20:15.10 You could do multiple stage positions, say in a 96 well
                        00:20:18.22 plate. And so that would give you this 3-dimensional stack
                        00:20:22.22 here where you've got a 2D image for each of these
                        00:20:25.18 planes. And in fact, in microscopy now, it's very common
                        00:20:28.17 to do not just stacks but what people call hyperstacks or
                        00:20:32.13 multidimensional images, where maybe you've got a
                        00:20:34.28 time series. And at each timepoint, you've got z-positions,
                        00:20:37.27 and at each z-position you've got multiple colors. So you've got
                        00:20:40.05 a four dimensional image stack. And so, how to store
                        00:20:47.14 and handle all this data becomes a sort of issue.
                        00:20:49.24 And so that's what I want to talk about now, file formats
                        00:20:54.08 and how you store it and work with this data. And in particular,
                        00:20:59.00 images can get quite big. Well, not the images themselves,
                        00:21:02.22 as much as the whole data set. A very common camera used in
                        00:21:06.08 microscopy has these dimensions, it's a 1392x1040
                        00:21:10.02 pixel camera, so it's about 1.4 - 1.5 megapixels. And it
                        00:21:15.00 records 14 bit images, so you need 2 bytes to store the
                        00:21:19.08 data. So that requires 2.8 megabytes just to store a single
                        00:21:22.13 image from that camera. So now, imagine you're doing some
                        00:21:27.01 complicated multidimensional experiment, where you're
                        00:21:29.10 acquiring three channels. At each position, you're acquiring
                        00:21:33.13 15 images, and then you're doing this whole thing
                        00:21:36.19 for 200 time points. So at each time point, you're going
                        00:21:39.04 to record your 15 z-positions, and then at each z-position,
                        00:21:42.05 you record your 3 channels. So if you multiply this all out,
                        00:21:45.14 you've got your 2.8 megabytes times the 3 channels, times the
                        00:21:49.11 15 z-positions, times the 200 time points, and that works
                        00:21:52.08 out to a 25 gigabyte data set. And this is not a particularly
                        00:21:56.14 complicated microscopy experiment. You might often see this
                        00:22:00.16 combined now in doing multiple positions in your sample.
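The back-of-the-envelope arithmetic above is worth writing out explicitly, using the camera dimensions quoted in the lecture:

```python
bytes_per_image = 1392 * 1040 * 2     # 2 bytes per pixel for 14-bit data
images = 3 * 15 * 200                 # channels x z-planes x time points
total_bytes = bytes_per_image * images

print(round(bytes_per_image / 2**20, 1))  # ~2.8 MiB per image
print(round(total_bytes / 2**30, 1))      # ~24 GiB for the whole experiment
```

Nine thousand images at roughly 2.8 megabytes each lands in the ballpark of the 25 gigabytes quoted, before adding any extra stage positions.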
                        00:22:03.06 And this just gets bigger and bigger. So data sets that are
                        00:22:07.18 10, 20, or more gigabytes are not uncommon. I know
                        00:22:10.03 people who routinely record 200 gigabyte data sets in
                        00:22:13.04 an evening. And so obviously, if you're storing data sets this large
                        00:22:17.29 you need to really think about how best to store them. And
                        00:22:20.24 particularly, how to keep track of all the information of where
                        00:22:23.04 each one of these images came from. You know, this now has
                        00:22:26.20 thousands of images and you need to know the z-position
                        00:22:29.21 and time point and the channel that each image corresponds to.
                        00:22:31.21 And this also brings up this issue of compression. Because
                        00:22:37.10 you'd like to not store 25 gigabytes if you don't have to.
                        00:22:41.02 So, there are two kinds of compression used for images
                        00:22:45.13 or used for things in general. And there's lossless compression
                        00:22:50.10 and lossy compression. And knowing the difference between these is
                        00:22:53.28 really critical. So lossless compression is exactly as it sounds like,
                        00:22:58.15 it preserves all of the information in your original image.
                        00:23:01.12 It compresses your image by removing redundant information
                        00:23:04.18 that can be restored after the fact. So this means you can always
                        00:23:07.15 get back your original image, your original raw data
                        00:23:09.22 from this image, despite the fact that it's been compressed
                        00:23:12.05 and takes up less space. Most compression you're probably
                        00:23:16.20 familiar with is not lossless. But it's lossy compression.
                        00:23:20.05 And so this is the kind of compression that's used for
                        00:23:22.24 consumer cameras, for standard imaging with
                        00:23:29.15 your iPhone or your digital camera. And in order to make
                        00:23:34.08 images smaller here, what this does is it throws out
                        00:23:37.03 image data that's not visually obvious. So it removes
                        00:23:42.14 small details in the image that you can't really see.
                        00:23:45.27 But there is real data there that's being thrown away.
                        00:23:49.16 And in fact, if you turn up the compression on one of these lossy
                        00:23:54.23 compression algorithms, eventually you start to see
                        00:23:56.16 the artifacts it's introducing. But most of the time, people who
                        00:24:01.14 write these algorithms try to design it so that the way it
                        00:24:04.11 throws out information is not really obvious to your eye,
                        00:24:06.17 but it is definitely throwing out information nonetheless.
                        00:24:09.10 And that means you can't get back your raw data here.
                        00:24:11.25 Once you've gone through one of these compression algorithms
                        00:24:13.25 that's lossy, JPEG is a very common one, you've lost
                        00:24:18.15 information about your image and you can't get it back.
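The lossless side of this distinction is easy to demonstrate with Python's standard library: zlib implements DEFLATE, the same algorithm behind PNG's lossless compression. (The fake "image" bytes here are illustrative.)

```python
import zlib

# A fake image: repetitive data, like a flat background, compresses well.
raw = bytes([10, 10, 10, 200, 200, 200]) * 1000

compressed = zlib.compress(raw)
restored = zlib.decompress(compressed)

print(len(compressed) < len(raw))  # True: it really takes less space
print(restored == raw)             # True: every byte comes back exactly
```

A lossy codec like JPEG offers no such round trip; decompressing returns an approximation, and the discarded detail is gone for good.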
                        00:24:21.01 And so that means for doing scientific work, these
                        00:24:25.13 are really terrible ideas. Because you've corrupted your
                        00:24:29.26 raw data, and you may be introducing artifactual stuff in
                        00:24:34.23 there. Introducing artifacts into your data that may affect
                        00:24:39.03 your conclusions. And there's a really great write up
                        00:24:42.24 on this website here about data compression for
                        00:24:46.20 both images and video. But the take home message is for
                        00:24:50.00 any scientific work, you want to be using lossless compression
                        00:24:53.06 and not lossy compression. So I want to just talk about
                        00:24:58.13 file formats a little bit. One of the most common file formats
                        00:25:02.17 you'll see used in imaging is TIFF, it's a fairly old file
                        00:25:05.25 format. But it supports both 8 and 16 bit images, which is nice
                        00:25:10.04 because we work with both. It supports lossless compression,
                        00:25:13.22 which is also nice because we don't want to use lossy
                        00:25:16.05 compression. And it supports grayscale or RGB. The downside
                        00:25:20.04 to it is that in general, TIFFs store only a single image or
                        00:25:24.08 a single color image. So that means in our example where we're
                        00:25:27.05 taking three channels and 15 z-stacks, and 200 timepoints,
                        00:25:31.20 we're going to need thousands of TIFFs to keep track of all those
                        00:25:35.16 images. And so that's not so nice. There are other file formats, there's
                        00:25:40.29 a new variant of JPEG, JPEG 2000, which you don't see
                        00:25:43.03 that much, which is lossless and quite nice. And then
                        00:25:46.29 basically to deal with this issue of keeping track of these
                        00:25:49.28 thousands of images, most of the microscopy companies
                        00:25:53.15 have come up with their own custom format, so you see
                        00:25:55.19 things like IDS and ND2, that's a Nikon format, ZVI and LSM
                        00:25:59.24 are Zeiss formats, and so on. And these are generally
                        00:26:04.05 pretty nice. They're lossless and they support the full
                        00:26:06.16 bit depth of your system. Almost always those custom formats
                        00:26:10.28 will support multidimensional images, so you can save that
                        00:26:14.17 entire three channel by 15 z-slice by 200 time point data
                        00:26:20.18 set as a single file, as a single large file, but a single
                        00:26:23.27 file. And these custom formats will also often keep track of
                        00:26:28.11 the metadata of your experiment. It'll keep track of
                        00:26:30.22 like how your microscope was set up and what the z-spacing
                        00:26:34.21 was in your z-stack. And what the different channels you used were.
                        00:26:36.29 What the time interval in your timelapse was.
                        00:26:40.09 And so those are very nice options, but the downside
                        00:26:43.02 to them is that because they're custom, they're not always
                        00:26:45.14 portable. So you certainly won't be able to open these on
                        00:26:48.21 something like Photoshop, you can always open them in the
                        00:26:52.13 manufacturer's software and generally you can find plugins
                        00:26:54.29 that open them in things like ImageJ. But if you want to get them
                        00:26:58.00 into something say, a custom program for doing image analysis,
                        00:27:00.04 you'll probably have to convert them to a TIFF stack
                        00:27:03.02 first. So a whole bunch of TIFFs and then load them to
                        00:27:06.14 your program. And then there's a whole bunch of file formats
                        00:27:09.26 you probably want to avoid. And those are things like JPEG,
                        00:27:13.13 because, as I said, it's lossy, the old GIF format, or BMP, and all
                        00:27:19.08 these formats that were designed more for ease of
                        00:27:22.13 use or computer simplicity than for really preserving data.
                        00:27:27.21 So many of these are restricted to 8 bit only, and some
                        00:27:31.25 like JPEG also use lossy compression automatically. And so
                        00:27:35.08 in general, they require corrupting your data in some way you
                        00:27:38.04 probably don't want to do. They may be fine for putting
                        00:27:41.16 pictures on your website, but you don't want to use them in your
                        00:27:44.11 scientific data processing chain. So finally I just wanted to
                        00:27:49.19 say a few words about the software tools that are out there for
                        00:27:52.15 doing image analysis and working with images. There's a really large
                        00:27:56.18 number of them, so I'm not going to go into any detail here,
                        00:27:59.22 but just to sort of summarize what's out there.
                        00:28:02.05 So there are a large number of programs that will do both
                        00:28:04.24 acquisition, meaning they'll control your microscope, and
                        00:28:07.23 do data analysis. And so there's a large number of commercial
                        00:28:11.18 ones, NIS-Elements is Nikon's software, AxioVision
                        00:28:15.00 is Zeiss's software. MetaMorph is a software package that's been
                        00:28:19.01 around a long time that's microscope independent, and will control
                        00:28:23.09 many different microscopes. Slidebook is a similar program
                        00:28:27.09 to MetaMorph, in that way. And then Micro-Manager
                        00:28:29.28 is one I want to touch on, because it's developed at UCSF.
                        00:28:32.17 And it's free and open source, and it's basically a
                        00:28:37.00 plugin that lives inside of ImageJ and will control microscopes.
                        00:28:40.14 And then you can use the ImageJ functions for data analysis.
                        00:28:43.25 So all these programs will let you both control your microscope
                        00:28:46.24 and do many different data analysis tasks. Then there's a bunch
                        00:28:51.00 of presentation tools, like Photoshop which you're probably familiar
                        00:28:54.09 with or the freeware open source version of this, Gimp.
                        00:28:57.14 Which are designed not really for doing scientific data
                        00:29:00.21 analysis, but for making attractive figures and putting
                        00:29:03.18 data together so that you can display it in a figure or
                        00:29:06.10 in a presentation. And then there's a bunch of sort of dedicated
                        00:29:11.21 image analysis packages, and these include things
                        00:29:13.24 like Matlab, which is really a general purpose programming
                        00:29:16.27 language in many ways, but it has very nice tools for doing
                        00:29:19.06 image analysis. ImageJ, which is this open source, free
                        00:29:25.07 tool that has many different image analysis routines built
                        00:29:29.06 into it that you can use for doing image analysis. Imaris
                        00:29:33.09 which is a commercial package, which is really optimized for doing
                        00:29:36.03 high end 3D visualization. CellProfiler, which is another free open source
                        00:29:41.25 tool that's optimized for doing high throughput screening
                        00:29:46.05 analysis, so analyzing thousands or tens of thousands of
                        00:29:49.01 images automatically. And there are many others I haven't
                        00:29:52.10 mentioned here, which are sometimes specialized for particular
                        00:29:55.15 tasks, or just a little less commonly used than these.
                        00:29:59.24 So that's all I'm going to say today, and I just want to thank
                        00:30:04.12 Nico Stuurman, who provided a lot of the slides and the
                        00:30:07.21 outline for this talk. Thank you.
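The bookkeeping problem described in the talk, one single-plane TIFF per channel, z-slice, and timepoint, can be made concrete with a quick back-of-the-envelope calculation (a minimal sketch using the example dimensions from the talk):

```python
# Example acquisition from the talk: 3 channels, 15 z-slices, 200 timepoints.
channels, z_slices, timepoints = 3, 15, 200

# Stored as classic single-plane TIFFs, every (channel, z, time) plane
# becomes its own file on disk:
n_files = channels * z_slices * timepoints
print(n_files)  # 9000 separate TIFF files to keep track of
```

This is why a single multidimensional file per acquisition, as the custom microscopy formats provide, is so much more convenient.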

                        Multidimensional files

                        • A sequential multichannel file opens correctly using View stack with: Hyperstack.
                        • Simultaneous multichannel files sometimes open with channels interleaved; a workaround:
                          1. Open with View stack with: Standard ImageJ and Stack order: Default (xyzct).
                          2. Convert the opened stack to a hyperstack via Image > Hyperstacks > Stack to Hyperstack, keep the default xyczt order, and fill in the appropriate c, z, and t values.
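In NumPy terms, the Stack to Hyperstack step with the default xyczt order amounts to a reshape of the flat plane sequence, with channel varying fastest. This is a minimal sketch with made-up dimensions and synthetic pixel data, not the Bio-Formats implementation:

```python
import numpy as np

# Hypothetical interleaved stack: all c, z, t planes opened as one flat
# sequence, with channel varying fastest (the xyczt order above).
channels, slices, frames = 3, 15, 200
height, width = 32, 32
flat = np.zeros((channels * slices * frames, height, width), dtype=np.uint16)

# "Stack to Hyperstack" with order xyczt regroups the planes like this
# (C-order reshape: the last axis, channels, varies fastest):
hyper = flat.reshape(frames, slices, channels, height, width)
print(hyper.shape)  # (200, 15, 3, 32, 32)
```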


                        ImageJ can display, edit, analyze, process, save, and print 8-bit color and grayscale, 16-bit integer, and 32-bit floating point images. It can read many image file formats, including TIFF, PNG, GIF, JPEG, BMP, DICOM, and FITS, as well as raw formats. ImageJ supports image stacks, a series of images that share a single window, and it is multithreaded, so time-consuming operations can be performed in parallel on multi-CPU hardware. ImageJ can calculate area and pixel value statistics of user-defined selections and intensity-thresholded objects. It can measure distances and angles. It can create density histograms and line profile plots. It supports standard image processing functions such as logical and arithmetical operations between images, contrast manipulation, convolution, Fourier analysis, sharpening, smoothing, edge detection, and median filtering. It does geometric transformations such as scaling, rotation, and flips. The program supports any number of images simultaneously, limited only by available memory.
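A few of the measurements listed above, the area and mean intensity of an intensity-thresholded object plus a density histogram, can be sketched in plain NumPy. The image and threshold here are synthetic illustrative values, not ImageJ's implementation:

```python
import numpy as np

rng = np.random.default_rng(42)
img = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)  # synthetic 8-bit image

# Intensity-thresholded "object": all pixels above an arbitrary cutoff
threshold = 128
mask = img > threshold
area = int(mask.sum())                     # object area in pixels
mean_intensity = float(img[mask].mean())   # mean pixel value inside the object

# Density histogram over the full 8-bit range, one bin per gray level
hist, _ = np.histogram(img, bins=256, range=(0, 256))
```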

                        Before the release of ImageJ in 1997, a similar freeware image analysis program known as NIH Image had been developed in Object Pascal for Macintosh computers running pre-OS X operating systems. Further development of this code continues in the form of Image SXM, a variant tailored for physical research on scanning microscope images. A Windows version, ported by Scion Corporation (now defunct) and known as Scion Image for Windows, was also developed. Both versions are still available but, in contrast to NIH Image, closed-source. [12]


                        We thank Robert Haase, Hella Hartmann, Florian Jug, Anna Klemm, and Pavel Tomancak for constructive comments on the manuscript and cheat sheets. Andreas Müller contributed the EM image used in Figure 4. Font Awesome icons were used in preparing the figures. We thank our reviewers and readers for their very helpful comments that substantially improved our work.

                        This publication was supported by COST Action NEUBIAS (CA15124), funded by COST (European Cooperation in Science and Technology).