
Quick start guide to getting the sensor working. In short, follow the sections below to:

  1. Install software
  2. Force a capture to verify the sensor works
  3. Fire x-rays at the sensor to test full functionality

driver here

Sensor info


X-rays are dangerous…make sure you know what you are doing. I am not responsible for death, injury, etc.

Supported devices

Vendor   Product                 Works?       Notes
Gendex   GXS700 small (type 1)   Needs work   Can probably be supported with minimal effort; mainly needs a different firmware load
Gendex   GXS700 large (type 2)   Yes          Primary device I develop for
Dexis    Platinum                Probably     Test failed, but the DUT was likely broken
Dexis    Plus 690                Probably     Test failed, but the DUT was likely broken

I don't know if any other vendors (e.g. Schick) use the same hardware.



Requirements

  • Linux
    • Tested on Ubuntu 12.04 x64
    • May work on Windows etc but not tested / supported
    • Suggest using an Ubuntu VM (VMWare, parallels, etc) for quickest start
  • USB port (duh)
  • Gendex GXS700
    • Dexis Platinum may work but is untested
    • I have one if there is interest in getting it working
  • x-ray source: not needed for initial setup
    • Recommended: DLI WPS7 to control x-ray head
  • You DO NOT need calibration files
    • See below for details

“$ cmd” means type “cmd” into a terminal

Do (I'll improve this as people run into problems):

  1. $ sudo apt-get install -y git python-pip
  2. $ sudo pip install libusb1
  3. $ cd uvscada
  4. $ ln -s $PWD/uvscada gxs700/
  5. $ ./gxs700/
  6. Plug your sensor into the USB port
    • Re-plug it if it's already plugged in
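The permission script referenced in the troubleshooting notes below grants non-root access to the sensor over USB. If you need to write such a udev rule by hand, it generally looks like the following; the file name and the `xxxx` IDs are placeholders (use `lsusb` to find your sensor's actual vendor/product IDs):

```
# /etc/udev/rules.d/99-gxs700.rules -- hypothetical file name and IDs
SUBSYSTEM=="usb", ATTRS{idVendor}=="xxxx", ATTRS{idProduct}=="xxxx", MODE="0666"
```

After installing a rule, reload udev ($ sudo udevadm control --reload-rules) and re-plug the sensor.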

Force capture

This step tests the sensor without actually firing x-rays at it. It is also useful for calibrating out sensor defects.


  1. $ python gxs700/ -f
  2. Check which you got:
    • “Waiting for image”: expected response. Continue below
    • “Exception: Failed to find a device”: is it plugged in? Did you run the permission script?
  3. It should have written capture_000.png and capture_000.bin to the current directory
  4. The above will be very dark. Enhance the contrast by doing histogram equalization: python gxs700/ -e capture_000.bin capture_000e.png
  5. You should see an image roughly resembling the reference above. If you do, your sensor probably works.
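The equalization step above can be sketched in a few lines of numpy. This is an illustration of the idea behind the -e option, not the driver's actual implementation; the raw byte order and image dimensions are assumptions:

```python
import numpy as np

def equalize_16_to_8(img16):
    """Histogram-equalize a 16-bit grayscale image down to 8 bits.

    A sketch of the idea behind the -e option, not the driver's
    actual code. Assumes the image is not completely flat.
    """
    hist, _ = np.histogram(img16, bins=65536, range=(0, 65536))
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Map each 16-bit value through the normalized CDF to 0..255
    lut = np.clip((cdf - cdf_min) * 255.0 / (cdf[-1] - cdf_min), 0, 255)
    return lut.astype(np.uint8)[img16]

# A raw .bin could hypothetically be loaded as little-endian 16-bit values:
# img16 = np.fromfile("capture_000.bin", dtype="<u2").reshape(height, width)
```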

X-ray capture


I suggest you force a capture first to verify your sensor is working.

  1. $ python gxs700/
  2. Verify it says “Waiting for image”. It will print some dots to indicate it's still polling
  3. Fire your x-ray source
  4. It should notice the x-rays and begin downloading an image
  5. It should have written capture_000.png to the current directory
    1. Currently also writes capture_000.bin but will probably remove this soon
  6. You have your first x-ray!

Diagnostic dump

Please do this and send me the result. It helps me understand the sensors better so I can provide better support.

  1. $ python gxs700/

Send me the “dump” directory that it creates. If you are going to send me data from multiple sensors, please move the directory before re-running, as it will be overwritten.

Reference dumps



Troubleshooting

  • Couldn't find device
    • Did you run the udev permission script?
    • Try re-plugging it in and/or restarting Linux system
    • VM: did you connect USB to the guest?
  • Didn't detect x-rays
    • Can you turn up current higher?
      • I only recently added mA monitoring…I don't know the approximate current I was imaging at
    • Too high or low kVp?
      • 60 kVp is a good place to start

Known issues:

  • Multiple sensors are not supported
    • Would slightly increase software complexity, and there's no use case for it today
  • The small sensor is not supported
  • Dexis untested
  • Really, only tested against my one sensor
  • Usually it can be interrupted, but sometimes init will fail if it is
    • Workaround: re-plug the USB port


  • USB speed limits frame rate to something like 0.3 FPS…don't expect this to work like a video camera
  • Currently doesn't use fxload but maybe should
  • The .bin file is raw output. The .png is a losslessly compressed equivalent of the .bin (you can recreate the .bin from the .png), so the .bin will probably get dropped
  • Image decoding can probably be made much faster but usually I don't care
    • For long runs (ex: CBCT) I'm waiting for sensor to cool anyway

Decoding has a “-e” option to do histogram equalization. This often brings out additional detail by re-mapping from 16-bit grayscale to shifted 8-bit grayscale. This works because humans can distinguish 8 bits of grayscale much more easily than 16. TODO: could we display usable 16-bit images by using two colors?


$ python -e capture_004.bin capture_004e.png

NOTE: future revisions may require .png input instead of .bin
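One way the two-color TODO above could be sketched: split each 16-bit value into its high and low bytes and put them in separate color channels. This is a speculative illustration, untested against real captures:

```python
import numpy as np

def two_color_16(img16):
    """Map 16-bit grayscale into RGB: high byte -> red, low byte -> green.

    One possible answer to the TODO above, not something the driver does.
    """
    rgb = np.zeros(img16.shape + (3,), dtype=np.uint8)
    rgb[..., 0] = img16 >> 8     # coarse intensity
    rgb[..., 1] = img16 & 0xFF   # fine detail
    return rgb
```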

x-ray trigger

I haven't messed with the default trigger settings, but I suspect you could trigger at very low x-ray levels if desired.

x-ray fire

I use a Gendex GE-100 x-ray head. I've found these readily available for under $75 on both eBay and Craigslist. It accepts near-line-level AC inputs for both the HV and the filament, making it easy to control from commodity variacs.

I control the setup via a DLI WPS7. When wired correctly:

  • Outlet 1: HV
  • Outlet 2: filament

do something like:

$ WPS7_HOST=wpsip WPS7_PASS=mypassword python

to fire an x-ray (i.e. when the sensor is armed). wpsip is the WPS7's hostname or IP address.

Note: you can also set these environment variables in files like .bashrc so you don't have to type them for each command
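The wiring above implies a firing sequence: filament on, warm up, pulse the HV, then shut everything off. A hedged sketch of that sequence against a DLI-style HTTP outlet interface follows; the URL path, the "admin" user name, and the warmup/exposure timings are all assumptions, so check your WPS7's documentation before trusting it with real hardware:

```python
import time
import urllib.request

def outlet_url(host, outlet, state):
    # DLI-style web power switch URL; the exact path is an assumption
    return "http://%s/outlet?%d=%s" % (host, outlet, state)

def build_opener(host, password):
    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    # "admin" user name is an assumption; your WPS7's may differ
    mgr.add_password(None, "http://%s/" % host, "admin", password)
    return urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(mgr))

def fire(host, password, warmup_s=2.0, exposure_s=0.5):
    """Fire the x-ray head: filament (outlet 2) on, HV (outlet 1) pulsed.

    Outlet numbering follows the wiring above; the warmup and exposure
    values are placeholders, not measured settings.
    """
    opener = build_opener(host, password)
    opener.open(outlet_url(host, 2, "ON"))   # filament on
    time.sleep(warmup_s)                     # let the filament warm up
    opener.open(outlet_url(host, 1, "ON"))   # HV on: x-rays firing
    time.sleep(exposure_s)
    opener.open(outlet_url(host, 1, "OFF"))  # HV off
    opener.open(outlet_url(host, 2, "OFF"))  # filament off
```

Hypothetical usage, matching the environment variables above: fire(os.environ["WPS7_HOST"], os.environ["WPS7_PASS"]).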

I recommend you use a hard-wired ethernet line and have a way to cut power to the switch remotely if it fails. I do this by running an extension cord across the room so that it can be pulled in the event of WPS7 failure (stuck switch, etc.).

TODO: use WPS7 scripting capability to make switch throws atomic


Calibration

The sensor takes raw, uncalibrated images. I've been asked a few times if I could provide calibration files for the Windows driver, or even use those calibration files with my driver. However, I'm not really interested in supporting their proprietary format.

That said, I'm not against coming up with an independent calibration scheme. I know how to take dark frames (see API), which is the key component. Uncalibrated images have generally been good enough for me, so I haven't put work into this. If you are interested, though, drop me a line.
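The dark-frame idea mentioned above amounts to simple frame subtraction: average several captures taken with no x-rays and subtract that from a real exposure. A minimal generic sketch, not a worked-out scheme for the GXS700:

```python
import numpy as np

def dark_correct(light, darks):
    """Subtract the average of several dark frames from a light frame.

    light: 2D uint16 array; darks: list of 2D uint16 arrays captured
    with no x-rays. A generic sketch, not GXS700-specific calibration.
    """
    master_dark = np.mean(np.stack(darks), axis=0)
    corrected = light.astype(np.float64) - master_dark
    return np.clip(corrected, 0, 65535).astype(np.uint16)
```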

Image quality

I haven't played around too much with trying to maximize quality. One thing that's clear, though, is that the sensor should be as close as possible to the object being x-rayed. The tube-to-sample distance doesn't seem to matter as much.
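The "sensor close to the object" observation follows from standard projection-radiography geometry: blur from a finite focal spot (geometric unsharpness) scales with the object-to-detector distance relative to the source-to-object distance. A quick worked sketch; the 1 mm focal spot below is a made-up value, not the GE-100's spec:

```python
def geometric_unsharpness(focal_spot_mm, src_to_obj_mm, obj_to_det_mm):
    """Penumbra blur: U = f * ODD / SOD (standard radiography geometry)."""
    return focal_spot_mm * obj_to_det_mm / src_to_obj_mm

# With a hypothetical 1 mm focal spot and the source 300 mm from the object:
#   sensor 10 mm behind the object  -> U = 1 * 10 / 300  ~ 0.03 mm (sharp)
#   sensor 100 mm behind the object -> U = 1 * 100 / 300 ~ 0.33 mm (blurred)
```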

Stitching has an example workflow using pr0ntools (panotools). At the core, it uses transparency-masked images to make use of the full, albeit irregular, sensor area.

uvscada/gxs700.txt · Last modified: 2016/09/07 13:03 by mcmaster