Saturday, September 28, 2013

The times they are a-changin'.... Satellite imagery at a fraction of the current market price...

I'd say this represents a technical shift that will be disruptive to current market structures...

Near real-time satellite imagery coming soon

Committee on Earth Observation Satellites

The Committee on Earth Observation Satellites (CEOS) includes 53 space agencies which together operate 286 Earth observation devices. One of the things that is changing rapidly is the resolution of these satellites. GeoEye-1, which was launched in 2008, is capable of 41 cm resolution. WorldView-2, launched about a year later, is capable of 46 cm resolution. The next DigitalGlobe satellite, WorldView-3, scheduled for some time in 2014, will be capable of 31 cm resolution. Unless the Department of Defense changes the rules, you won't be able to buy imagery at this resolution; it is still restricted to 50 cm.

There is a new type of satellite cluster that has the potential to be disruptive. Two startup satellite companies have already started putting satellite constellations in space that promise to provide more frequent revisits per day than DigitalGlobe's satellites can, and at a much lower cost.

Planet Labs

In April 2013 Planet Labs launched two demonstration satellites, “Dove 1” and “Dove 2”.  In early 2014, Planet Labs plans to launch 28 mini Earth-observing satellites at an altitude of 400 km.  The satellites will provide frequent snapshots of the planet at a resolution of about 5 m, allowing users to track changes—from traffic jams to deforestation—in close to real time. The satellites will send their images to at least three ground stations—two in the U.S. and one in the U.K. The data will be processed and uploaded for use by customers almost immediately.
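For a back-of-the-envelope sense of what a 400 km constellation means for revisit frequency, Kepler's third law gives the orbital period directly. The little Python sketch below assumes a simple circular orbit and ignores swath width and orbit-plane geometry; it works out to roughly 15 orbits per satellite per day.

```python
# Back-of-envelope check on how often a 400 km satellite circles the Earth.
# Assumes a circular orbit; actual revisit times over a given spot depend on
# orbit planes and sensor swath, which are not modeled here.
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0       # mean Earth radius, m

def orbital_period_minutes(altitude_m: float) -> float:
    """Period of a circular orbit at the given altitude (Kepler's third law)."""
    a = R_EARTH + altitude_m                      # semi-major axis
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60.0

period = orbital_period_minutes(400_000)          # Planet Labs' stated altitude
print(f"Period: {period:.1f} min, ~{24 * 60 / period:.1f} orbits per day")
# -> roughly 92-93 minutes, i.e. about 15.5 orbits per satellite per day
```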
  
Skybox Imaging SFO

Skybox Imaging plans to launch a constellation of 24+ satellites that will capture high-resolution imagery and the first-ever HD video of any spot on earth, multiple times per day. Skybox will capture the planet on a near real-time basis to provide a tool for addressing global challenges in areas including security, humanitarian efforts, and environmental monitoring. In both cases it is expected that the cost of the imagery will be significantly less than current pricing.

Sunday, September 22, 2013

360 Sony Smartly Done..

Experimental Motion API for the Sony Smart Imaging Stand


The Sony Smart Imaging Stand may seem like a novelty item, but it has some genuine potential. It is essentially a motorized “tripod” for Sony Xperia devices running Android 4+ that can adjust its position to perform tasks such as making sure your face is in the shot and more. In fact, that is exactly what Sony’s first party SmileCatcher app accomplishes while recording video. Bluetooth pairing is also a breeze, as it’s initiated via NFC.

However, the built-in functionality of the SmileCatcher application is not what makes the Smart Imaging Stand special. Rather, the stand is interesting because Sony has opened up its (experimental) Motion API in hopes that developers can write applications that make use of the stand's unique functionality.

To get started developing with Motion API, download the Motion API Developer kit. More information can be found on the Sony Developers Site. Do you have any interesting applications in mind for Sony’s new Smart Imaging Stand? Let us know in the comments section below!

Photos, Photos Everywhere but no Glass yet to drink?


Briefly: Google+ photos on Google Earth, MyGlass app updates


Google Earth for Android update shows geotagged images on map

The Android app for Google Earth has been updated to show photographs taken by the user, inside a new Google+ Photos layer. Geotagged photos uploaded to Google+ can now be viewed in the app, with the images appearing as thumbnails in the locations where they were taken, and selected photos opening a full-screen slideshow of the album the photo belongs to.
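For the curious, the geotag that drives a photo layer like this is just a handful of EXIF GPS fields inside the JPEG. Here is a minimal sketch of reading them with Pillow; the file name is a placeholder, and a reasonably recent Pillow version is assumed (where the EXIF rationals convert cleanly with float()).

```python
# Minimal sketch: pull latitude/longitude out of a geotagged JPEG with Pillow.
from PIL import Image
from PIL.ExifTags import GPSTAGS

def dms_to_degrees(dms, ref):
    """Convert EXIF degrees/minutes/seconds rationals to signed decimal degrees."""
    deg = float(dms[0]) + float(dms[1]) / 60.0 + float(dms[2]) / 3600.0
    return -deg if ref in ("S", "W") else deg

def gps_from_jpeg(path):
    exif = Image.open(path)._getexif() or {}
    gps_raw = exif.get(34853)                 # 34853 is the EXIF GPSInfo IFD tag
    if not gps_raw:
        return None
    gps = {GPSTAGS.get(k, k): v for k, v in gps_raw.items()}
    lat = dms_to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"])
    lon = dms_to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"])
    return lat, lon

print(gps_from_jpeg("photo.jpg"))             # placeholder file name
```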

MyGlass Google Glass app adds screencast remote control mode on Android

Google has updated the MyGlass companion Android app for Google Glass to include a "screencast" mode. The app will now show what is being displayed in the head-mounted display on a smartphone screen, according to Engadget. The screencast mode also acts as a remote control, allowing the user to perform the usual Glass swiping maneuvers through the phone, without needing to do it on the touch sensor on the side of the device.


Read more: http://www.electronista.com/articles/13/09/06/google.earth.for.android.update.shows.geotagged.images.on.map/#ixzz2eGTrS6gG

Hero 360




Out with the Old, in with the New!!






Full 360 Immersion for under $4,500

Whoops, GoPro knocks over another camera... Immersive Media's days are numbered.

If we could just get the FAA out of the way???

Two stories - one about an application to agriculture and another about how you might possibly innovate on your own... soon?

UAS Mounted Spectrometers Monitor Barley and Sugar Beet Crops in New Zealand

A pair of Ocean Optics miniature spectrometers – one flown on a UAS and a second deployed in a ground unit – are helping plant scientists to monitor barley and sugar beet crops by providing hyperspectral measurements.

According to the Florida-based spectroscopy company, its lightweight STS model has flown initial experiments at altitudes of up to 200 meters. The airborne unit gathers high-resolution reflectance spectra, with irradiance monitored by the ground unit.

According to team leader Andreas Burkart, from the Research Center Jülich in Germany, collecting hyperspectral data by conventional field spectroscopy is a time-consuming task and is usually restricted to easily accessible areas.

In stark contrast, the UAS-deployed spectrometer is able to deliver fast and reproducible measurements over any terrain, whether farmland, forest or marsh. By measuring various segments across a section of the New Zealand pastureland, the system has been able to assess specific plots with live vegetation.
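To give a flavor of what "reflectance spectra, with irradiance monitored by the ground unit" means in practice, here is a rough numpy sketch: the airborne spectrum is ratioed against the ground reference resampled onto the same wavelength grid, and a simple red/near-infrared index flags live vegetation. The arrays and band limits are placeholders, not the team's actual calibration chain.

```python
# Rough sketch of the dual-spectrometer idea: reflectance is estimated as the
# ratio of the airborne (upwelling) signal to the ground (downwelling) reference,
# then a red/near-infrared index highlights live vegetation. Dummy spectra only.
import numpy as np

def reflectance(wl_air, counts_air, wl_ground, counts_ground):
    """Ratio the airborne signal to the ground reference on a common wavelength grid."""
    ref = np.interp(wl_air, wl_ground, counts_ground)   # resample ground spectrum
    return np.clip(counts_air / np.maximum(ref, 1e-9), 0.0, 1.5)

def ndvi(wl, refl, red=(640, 680), nir=(780, 900)):
    """Normalized difference vegetation index from band-averaged reflectance."""
    r = refl[(wl >= red[0]) & (wl <= red[1])].mean()
    n = refl[(wl >= nir[0]) & (wl <= nir[1])].mean()
    return (n - r) / (n + r)

wl = np.linspace(350, 800, 1024)                        # STS visible range, nm
refl = reflectance(wl, counts_air=np.ones_like(wl),     # dummy spectra
                   wl_ground=wl, counts_ground=np.ones_like(wl))
print(f"NDVI: {ndvi(wl, refl):+.2f}")
```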

The lightweight nature of the tool is critical: employing a CMOS light sensor, the STS spectrometer measures just 40 mm x 42 mm, and weighs only 68 grams. Despite those tiny dimensions, Ocean says that it can provide full spectral analysis with low stray light, a high signal-to-noise ratio and excellent optical resolution.

Ocean claims: “For the application described here, the researchers were able to match the performance of the STS to that of a larger, more expensive commercially available field portable spectrometer, with optical resolution of approximately 2.5 nm (full-width, half-maximum).”

The tiny spectrometer is available in two wavelength ranges, in the form of visible (350-800 nm) and near-infrared (650-1100 nm) options, with Ocean saying that it is particularly suited to high-intensity applications such as LED characterization and absorbance or transmission measurements.

Photo: Miniature STS spectrometers weighing 68 grams mounted on OctoCopter – Ocean Optics.

- See more at: http://www.uasvision.com/2013/07/10/uas-mounted-spectrometers-monitor-barley-suger-beet-crops-in-new-zealand/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+UasVision+%28UAS+VISION%29#sthash.BNNr9GTz.dpuf

Spiri Is A Programmable Quadcopter That Lets Developers Focus On Building Airborne Apps



If you’re hankering to hurry up a Half Life-style future of eye-in-the-sky scanners keeping tabs on the comings and goings of human meat-bags you’re going to need a decent quadcopter to carry your dystopic dreams. Enter Spiri, a programmable quadcopter that’s been designed as a platform for airborne app creation. It’s also autonomous, meaning you don’t have to have mad piloting skills yourself just to test whether your neighbour spy app works. And even if your neighbour gets annoyed and throws a rock at it, Spiri can take a few knocks (thanks to reinforced carbon fiber ribbon protecting its body/blades).

The Linux-based quadcopter comes stuffed with sensors, cameras, wi-fi — i.e. the sorts of things you might want to power your apps — plus cloud support and development tools. One advantage of using Spiri vs a less developer-friendly quadcopter is that devs don’t have to worry about controlling and correcting its flight (which is powered by a separate processor) — that side is taken care of, say its creators. So you can concentrate on honing your computer vision algorithms to peek into Mr Trilby’s garden shed.

Spiri’s Canada-based creators are hoping to build a community of developers around the device, so have an API and are developing an app platform for distributing apps:

Our API and library of flight primitives and other basic commands allow developers to work on top of the main chip, which runs Ubuntu Linux with ROS (Robot Operating System). This is an open source platform supported by an active community of hobbyists, engineers and scientists. We are designing a simple script-calling environment for end use, as well as a native programming environment for app development. The Spiri Applications Platform, also under development, will give developers a way to get their apps out to the wider Spiri user base.
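Since Spiri runs Ubuntu with ROS, apps will presumably look like ordinary ROS nodes. The sketch below shows that general pattern in Python with rospy; the topic name is purely a guess for illustration, since Spiri's real flight primitives and topic names aren't spelled out here.

```python
#!/usr/bin/env python
# A generic ROS node of the kind a Ubuntu/ROS quadcopter stack could accept.
# The topic name "/spiri/cmd_vel" is hypothetical; the real flight primitives
# would come from Spiri's own API and library.
import rospy
from geometry_msgs.msg import Twist

def hover_and_yaw():
    rospy.init_node("spiri_demo")
    pub = rospy.Publisher("/spiri/cmd_vel", Twist, queue_size=1)  # hypothetical topic
    rate = rospy.Rate(10)                 # 10 Hz command stream
    cmd = Twist()
    cmd.angular.z = 0.2                   # slow yaw while the autopilot holds position
    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()

if __name__ == "__main__":
    hover_and_yaw()
```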

The quadcopter’s main processor, which will run your apps, is a 1GHz dual-core ARM Cortex-A9, giving this gizmo roughly as much power as a mid-range Android smartphone. Airborne apps that might make sense for Spiri could include urban mapping or building maintenance use-cases, say its creators. But really, thinking up the cool software stuff is where you guys come in.

Spiri’s makers are seeking to raise $125,000 via a Kickstarter campaign to get this gizmo off the ground. One Spiri quadcopter can be yours if you pledge $520 — but the full dev kit plus Spiri package costs from $575. They’re aiming to ship to backers next April.

Thursday, September 19, 2013

Nikon Ground Imagery - The next generation


Nikon 1 AW1 touted as 14.2MP waterproof interchangeable lens camera


Nikon has added a waterproof, shockproof, and freezeproof camera to its Nikon 1 range. The Nikon 1 AW1 is an interchangeable lens camera with a 14.2-megapixel CX-format CMOS sensor, which can survive in water to depths of 49 feet, can survive falls of up to 6.6 feet, and will work in temperatures as low as 14 degrees Fahrenheit.

While it is built to be used outdoors, it contains a feature set similar to the Nikon 1 J3. Using Nikon's EXPEED 3A image processing engine, it has an ISO range of between 160 and 6,400, an Advanced Hybrid Autofocus that uses its 73-point AF array to focus in 80 milliseconds, and offers continuous shooting frame rates of 15fps with AF and 60fps when focus is locked. On the back is a three-inch, 921k-dot display, while the Wi-Fi and GPS are accompanied by a depth gauge, an altimeter, and a compass.

Video can be recorded at 1080p, with slow motion modes at 400fps and 1,200fps also available, while on-board modes for stills include a Best Moment mode that can choose the best from 20 shots, and a Slow View mode that records 1.33 seconds of video for easier action shots.

Accompanying the AW1 are two new lenses that share its waterproofing and shock protection. The 1 Nikkor AW 11-27.5mm f/3.5-5.6 zoom lens and the 1 Nikkor AW 10mm f/2.8 allow the AW1 to perform its water-based photography, though it is also usable with other lenses in the Nikon 1 range.

The Nikon 1 AW1 will be shipping with the 11-27.5mm lens as a kit for $800, with the two-lens kit costing $1,000 when they ship in October, with a number of color options available.

Read more: http://www.electronista.com/articles/13/09/19/can.survive.49ft.dives.6ft.drops.temperatures.of.14.fahrenheit/#ixzz2fLWcncpo

Tuesday, September 10, 2013

3D Photo, Photosynth, and Ground Imagery


http://ruthless89.wordpress.com/web-and-interactive-design/
Ruthless Design
It must have been eight or ten years ago that I attended a conference where a couple of the wizards from the Media Lab within Microsoft Research briefed us on their very-large-image management design for the internet, then called SeaDragon, and a second really cool digital imagery tool, Photosynth.  These two integrated tools are incredibly underappreciated in my opinion.

SeaDragon has become Deep Zoom, the critical element of Microsoft's Silverlight for viewing images and graphics.  SeaDragon represents a sea change (whoops) that allows you to make a gigapixel (1,000 megapixel) panorama if you want, and it can be opened by a viewer on the Web or in a mobile app in just a couple of seconds. And despite Silverlight's uncertain future due to HTML5, the Deep Zoom group within Bing has created an open Java toolset, in case you wondered.

Deep Zoom does for images what Google Maps does for a map layer.  The viewer only calls out the particular tiles necessary for the current view.  In Deep Zoom, more and more image detail is provided as you zoom in, whereas in Maps, nested layers of increasing detail are provided.  They are very alike in their design, although the construction and source of the tiles at each scale are quite different.
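To make the tile-pyramid idea concrete, here is a small Python sketch of the Deep Zoom style arithmetic: each level halves the one above it, and a viewer only fetches the tiles that intersect the current viewport. The 256-pixel tile size is an assumption for simplicity (Deep Zoom typically uses 254 px tiles plus a 1 px overlap).

```python
# Sketch of the Deep Zoom pyramid arithmetic: each level halves the previous
# one, so a viewer only ever fetches the handful of tiles that intersect the
# current viewport at the current zoom level.
import math

def pyramid_levels(width, height):
    """Level numbers and pixel dimensions, finest level last (Deep Zoom convention)."""
    max_level = math.ceil(math.log2(max(width, height)))
    levels = []
    for level in range(max_level + 1):
        scale = 2 ** (max_level - level)
        levels.append((level, math.ceil(width / scale), math.ceil(height / scale)))
    return levels

def tiles_for_viewport(level_w, level_h, view, tile=256):
    """Column/row indices of the tiles covering a viewport (x, y, w, h) at one level."""
    x, y, w, h = view
    cols = range(x // tile, min(math.ceil((x + w) / tile), math.ceil(level_w / tile)))
    rows = range(y // tile, min(math.ceil((y + h) / tile), math.ceil(level_h / tile)))
    return [(c, r) for r in rows for c in cols]

# A 1-gigapixel panorama (40,000 x 25,000 px) builds a 17-level pyramid...
for level, w, h in pyramid_levels(40_000, 25_000)[-3:]:
    print(level, w, h)
# ...yet one 1080p screenful at full resolution only needs a few dozen tiles.
print(len(tiles_for_viewport(40_000, 25_000, view=(12_000, 8_000, 1920, 1080))), "tiles")
```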

But it was Photosynth that really captured my interest.  I have found the Photosynth tool to be one of the really incredible ideas for spatial media geo-photographers.  It has fascinated me ever since I first saw what it could do back in 2007, particularly the spatial 3D model point cloud.  And this point cloud is the magic, as it can be generated from a set of seemingly disconnected images having only a common inward or panographic viewshed. It is LIDAR-like at a fraction of the cost and offers not a few hundred thousand points but essentially millions, via simple point-and-shoot cameras or the ever-evolving smartphone.


Synths are generated locally via the Photosynth app and uploaded to the Photosynth web site.  The Microsoft desktop tool will organize a set of images of a common object of interest and relate them together such that they form a 3D point cloud that stitches/relates the image sets to their points of view and intersections of viewshed. Images will synth together regardless of when, or with what camera or point of focus, they were taken.  The only requirement is that there is a common overlap...
"Photosynth was inspired by the breakthrough research on Photo Tourism from the University of Washington and Microsoft Research. This work pioneered the use of photogrammetry to power a cinematic and immersive experience.' 
"The first style, and the one that still uniquely defines the product, is what we call a “synth”. A synth is a collection of overlapping photographs that have been automatically reconstructed into a 3D model of the space. The synth-ing process solves the same problem our brains are confronted by when we look at the world: the slight differences between what our left and right eyes see gives us cues about how far away different parts of the scene are. In a similar way, the differences between two photos taken from nearby positions can be analyzed to determine which parts of the scene are close, and which are further away. Amazingly, the synth-ing algorithm can reconstruct a scene of 200 photos in just five or ten minutes on an average laptop."
My interest in "Synth-ing" has been in its application to creating a point cloud from geotagged images. This quest led me to Vexcel, a Microsoft company located in Boulder, Colorado, that was tapped to continue working on the science of PhotoSynth-ing.  Vexcel has gone through a number of iterations of purpose and are now some sort of a black-box.  But at one time they were working on the extension of the Photosynth concept to be a spatialist's tool named GeoSynth.  Red Hen worked with them for a bit on their use of spatial motion video on the promise Red Hen might be able to apply the stand-alone tool for spatial ground imagery users via a digital mapping reconnaissance toolkit. We were excited as GeoSynth evolved to include a multipurpose design that had great potential.
  • Drive by a set of buildings collecting a fast series of high-resolution photos, and process the photos into map-accessible products in minutes.
  • Photograph a room from various perspectives, magnifications, and times of day. Navigate the synthesized collection spatially for situational awareness or mission training.
  • Quickly and automatically form a 3-D aerial view from a series of photos collected from an unmanned aerial vehicle (UAV).
  • Combine scenes from digitized video or handheld camera stills such that all images receive a full geotagging relationship... with a positional accuracy of under a meter if only a few of the image set included a reasonable GPS geotagged position (see the sketch just after this list).  This particular magic would allow legacy imagery scanned to digital to then become part of the synth.
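A sketch of one way that last bullet could work: fit a similarity transform (scale, rotation, translation) from the synth's arbitrary model frame to local map coordinates using only the handful of cameras whose GPS positions are known, then apply it to every other camera and cloud point. This is a generic Umeyama-style least-squares fit, not GeoSynth's actual algorithm; the coordinates below are made up.

```python
# Anchor an arbitrary synth frame to map coordinates from a few known cameras.
import numpy as np

def fit_similarity(model_pts, map_pts):
    """Least-squares s, R, t such that map ~ s * R @ model + t (Umeyama, 1991)."""
    mu_m, mu_g = model_pts.mean(0), map_pts.mean(0)
    A, B = model_pts - mu_m, map_pts - mu_g
    U, S, Vt = np.linalg.svd(B.T @ A / len(A))        # cross-covariance SVD
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / A.var(0).sum()
    t = mu_g - s * R @ mu_m
    return s, R, t

# Four cameras with known GPS (converted to local metres) anchor the frame;
# every other camera and cloud point then inherits map coordinates.
model = np.array([[0., 0., 0.], [1., 0., 0.], [0., 2., 0.], [0., 0., 1.]])   # synth frame
gps   = np.array([[10., 5., 0.], [12., 5., 0.], [10., 9., 0.], [10., 5., 2.]])  # local metres
s, R, t = fit_similarity(model, gps)
print(np.round(s * (R @ model.T).T + t, 2))           # should reproduce the GPS positions
```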
GeoSynth may still reside somewhere in Bing, but we still have Photosynth and a number of derivatives and innovations based on the original concept of photo tourism.  One part of the implied value of any synth is the derived camera orientations.  These camera orientations can still be extracted, though not in GeoSynth's totally cool KML form - so cool that the Vexcel developer soon departed after synth clouds began to form over Google Earth-scapes? There is a small group of extensions that can further distill Photosynth's hidden bits, as found via The best of the homebrew Photosynth-data exporting tools.

My Uluru synth was derived from clipping some 80+ images from Google Earth, on the chance that maybe I could get a point cloud.  Take a gander at the example live synth just below.



Another really great example of the 3D cloud of a landscape comes from Mokojo's visit to Giza.



The evolution of digital images to create 3D models continues.  There is one product from an academic group in Switzerland that created a really elegantly simple personal UAV.  Another similar solution, but tuned for the 3D modeling of landscapes, can be found in Pix4D.  The Swiss are going after these ideas in a significant way.  Disney has a group there that has created a photo-3D tool for their magical purposes as well.



Autodesk has been interested in the image-to-3D-model process for some time.  Some years ago they initially offered a free tool named Photofly.  Photofly initially received some great reviews as it was supported by the "illustrious AutoCAD software, [which could] stitch standard digital photos into accurate 3D models. You don’t need a fancy camera — a point-and-shoot is more than good enough — and by leveraging Autodesk’s cloud computing cluster, you don’t even need a powerful computer to use Photofly. 3D models can be created out of faces, static objects, interior rooms, and even external architecture. Best of all, though: Photofly 2 is free." as described by ExtremeTech back in early 2011.  It is now known as 123D Catch and is still free as long as you own a legit license to AutoCAD.

Tgi3D PhotoScan is a camera calibration solution that, with some effort, can create a wireframe with image textures.  PhotoModeler offers a good bit of easy explanation of the photo-to-3D process.

3Defy is a rather neat small tool that can create a cha-cha 3D model from a single image.  You can also get a free solution from Cornell that has some neat results.



Indigo-i from Hover, Inc. is a hybrid of SketchUp, the Google tool for 3D modeling and insertion into Google Earth that is now owned by Trimble.

3D Software Object Modeller Pro is a commercial product that creates a fully textured 3D model based on inward-facing images.  It's a rather convoluted process that actually makes quite good models... at a price.

Hyper3D is a GeoSynth-like tool that has evolved into a 3D modelling solution whose output can then be printed.  The new outfit and business model is known as Cubify.  I have not put much time into evaluating its 3D additive manufacturing design, but it sounds like a process that is likely to evolve.  To make it work properly you are required to use their viewer, though.



And lastly, there is another way that is a hybrid of the whole process and can create 3D renderings with a LIDAR process.  ASC has created 3D Flash LIDAR cameras that operate and appear very much like 2D digital cameras. Their 3D focal plane arrays have rows and columns of pixels, also similar to 2D digital cameras, but with the additional capability of capturing 3D "depth" as well as intensity. The LidarCam has a 128-by-128 sensor that creates a complete lidar frame - no scanning required.  Why it is a neat idea is that the LidarCam can operate at up to 30 frames per second, so you essentially have lidar video, with its 3D effects and accurate instantaneous measurements.  It really gets neat when a 36-megapixel color image is draped over its ~16,000 instantaneous measurements.
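Draping the color image over the range data is conceptually a projection exercise: back-project each of the 128 x 128 range returns to a 3D point, project it into the color frame, and sample the pixel underneath. The numpy sketch below assumes a shared pinhole model with made-up intrinsics and a dummy depth frame; it is not ASC's actual calibration or processing.

```python
# Sketch of "draping" a colour image over one flash-lidar frame via a pinhole model.
import numpy as np

def colorize_lidar_frame(depth_m, color_img, f_lidar, f_cam, cx, cy):
    """Return an (N, 6) array of x, y, z, r, g, b for one 128x128 lidar frame."""
    rows, cols = depth_m.shape
    u, v = np.meshgrid(np.arange(cols), np.arange(rows))
    # Back-project lidar pixels to 3D points (lidar and camera assumed co-located).
    z = depth_m
    x = (u - cols / 2) * z / f_lidar
    y = (v - rows / 2) * z / f_lidar
    # Project the 3D points into the higher-resolution colour image and sample it.
    px = np.clip((f_cam * x / z + cx).astype(int), 0, color_img.shape[1] - 1)
    py = np.clip((f_cam * y / z + cy).astype(int), 0, color_img.shape[0] - 1)
    rgb = color_img[py, px]
    return np.column_stack([x.ravel(), y.ravel(), z.ravel(), rgb.reshape(-1, 3)])

depth = np.full((128, 128), 25.0)                      # dummy 25 m range frame
image = np.zeros((4912, 7360, 3), dtype=np.uint8)      # ~36 MP colour frame, all black
cloud = colorize_lidar_frame(depth, image, f_lidar=90.0, f_cam=4000.0, cx=3680, cy=2456)
print(cloud.shape)                                     # (16384, 6): one coloured point per return
```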






Thursday, September 5, 2013