Thursday, December 11, 2014

Next Generation of Android Camera API...

Samsung Galaxy NX brings Android to interchangeable lens cameras

I guess this shouts “game on”?  It is generally known that Android started life as a camera control system and morphed into the open-source, dominant handheld-communicator OS, with the digital camera as a core feature.  I am not certain exactly what this is going to offer our kit, but I feel there is an opportune moment right in front of us.


GizMag, June 22, 2013

They have a second camera, the NX1, just now available, that does not run Android as its OS, but what it does provide is as state-of-the-art an imager and as capable a 4K streamer and recorder as there is on the market. Its feature set has rattled Sony, Canon and Nikon; the industry is stunned that Samsung’s drive into hybrid mirrorless still/video has yielded such a technical disruption. Most importantly, it offers some of the first 4K content generated in-camera with H.265 compression. H.265 is the next generation of compression, allowing 4K frame resolution to be distributed across mobile, home and office networks at roughly the same bit rate as 1080p under H.264. The trade-off is that H.265 requires more horsepower to decode, and very few viewers can handle it today. H.265 sits squarely in the MPEG LA licensing camp; its open-source counterpart is VP9, Google’s effort to help with aggregated internet traffic. Only recently have 4K monitors come to market. What we are most interested in is the extraction of 4K frames to JPEG and how that content can then be sorted and processed.
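As a starting point for that sorting-and-processing workflow, here is a minimal sketch of pulling JPEG stills out of an H.265 clip, assuming ffmpeg (built with an HEVC decoder) is installed and on the PATH; the file names and the one-frame-per-second sampling rate are placeholders, not anything NX1-specific.

```python
# Sketch: extract still JPEGs from a 4K H.265 clip with ffmpeg, then list them in order.
# Assumes ffmpeg with HEVC decoding is on the PATH; paths and the fps rate are placeholders.
import subprocess
from pathlib import Path

def extract_frames(clip: str, out_dir: str, fps: float = 1.0):
    """Decode `clip` and write one JPEG every 1/fps seconds into out_dir."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        [
            "ffmpeg", "-i", clip,
            "-vf", f"fps={fps}",   # sample the decoded stream at the requested rate
            "-q:v", "2",           # high-quality JPEG output
            str(out / "frame_%05d.jpg"),
        ],
        check=True,
    )
    # File-name order preserves decode order; real sorting/processing
    # (sharpness scoring, tagging, culling) would happen on this list.
    return sorted(out.glob("frame_*.jpg"))

if __name__ == "__main__":
    frames = extract_frames("nx1_clip.mp4", "frames", fps=1.0)
    print(f"extracted {len(frames)} JPEG frames")
```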

Camera FV-5 Updated With Support for Android 5.0 Camera API - Droid Life: A Droid Community Blog
http://www.droid-life.com/2014/12/10/camera-fv-5-updated-with-support-for-android-5-0-camera-api/

via Digg Reader

Wednesday, December 10, 2014

Precision Ag Monitoring - 5-Channel Global Shutter RedEdge(tm)

RedEdge™ is much more than just a camera, sharing more design features with Earth imaging satellites than it does with standard consumer cameras. Industrial imaging sensors provide high dynamic range in varying lighting conditions while removing artifacts commonly seen in drone video and imagery. Coupled with MicaSense Data Services, RedEdge™ provides a complete imaging, processing, and analysis system ready for integration with any platform.

Need a closer look? Unplug your RedEdge™ and go handheld. An integrated shutter button lets you take close-ups of areas of interest, and with the optional GPS module, maintain geo-tagging and time-stamping of all of your multispectral images.
  • More than just pretty pictures - simultaneous capture of five discrete spectral bands optimized for crop health data gathering.
  • Fly fast - we can keep up! Capturing all bands once per second enables faster flight speeds and lower altitude captures.
  • Distortion-free - global shutter design creates distortion-free images on every platform.
  • Single card - tired of juggling multiple cards? With RedEdge™, one SD card stores all images and data for easy data transfer.
  • Metadata included - all image files are time-stamped and geo-tagged, no third party tools or autopilot logs required.
  • Fully calibrated - leave your integrating sphere at home, we have you covered. Every camera comes fully calibrated for precise, repeatable measurements every time.
  • Stand-alone - with external GPS connections and self-triggering capability, easily collect geo-tagged data without any connections to the host vehicle.
  • Data at your fingertips - With Wi-Fi built in, preview images and change settings from your phone, tablet, and computer.
  • Fully integrated - our convenient host interfaces allow tight integration with any platform, enabling full data capture by providing GPS data and power.
  • No moving parts - solid-state design means nothing to break on those dusty days or the occasional harsh landing.
  • Weight: 190 g (6.7 oz)
  • Dimensions: 4.8” x 2.9” x 1.5” (12.2 cm x 7.4 cm x 3.8 cm)
  • Spectral Bands: Narrowband filters for Blue, Green, Red, Red Edge, Near-Infrared
  • Ground Sample Distance: 5.4 cm/pixel (per band) at 200 ft (60 m) AGL (see the scaling sketch below)
  • Capture Speed: 1 capture per second (all bands), 12-bit RAW
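Because GSD scales linearly with altitude for fixed optics, the published 5.4 cm/pixel at 60 m figure is enough to estimate resolution at other flight heights. A quick sketch of that scaling (the 60 m reference point comes straight from the spec table above; the altitude list is arbitrary):

```python
# Sketch: scale the published RedEdge GSD (5.4 cm/pixel at 60 m AGL) to other altitudes.
# GSD is proportional to altitude for a fixed focal length and pixel pitch.
REF_GSD_CM = 5.4   # cm/pixel per band, from the spec sheet
REF_ALT_M = 60.0   # reference altitude above ground level (200 ft)

def gsd_at(altitude_m: float) -> float:
    """Estimated ground sample distance (cm/pixel) at a given altitude AGL."""
    return REF_GSD_CM * altitude_m / REF_ALT_M

for alt_m in (30, 60, 100, 120):
    print(f"{alt_m:>4} m AGL -> ~{gsd_at(alt_m):.1f} cm/pixel")
```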

Friday, October 24, 2014

Getac T800 - My next computer? Maybe....

Getac announces the availability of the new Getac T800, a fully rugged 8.1” tablet running Windows 8.

Getac announces market availability of the Getac T800, its new 8.1″ tablet with Windows 8, designed specifically to support the productivity of those who work outdoors. Relatively thin and light, the Getac T800 has been designed around the specific needs of operators and technicians who work outdoors in sectors such as utilities, field services and public safety.

The Getac T800 is not comparable in size to consumer Windows tablets, yet it is small enough to be carried into environments that are hostile to most electronic devices. With a thickness of 24mm and a weight of 880g, the Getac T800 can, according to the company, be carried in a pocket or in one hand, giving the user the power, reliability and versatility of a typical Windows system.

With the SnapBack system you can also add extensions to the tablet, such as a second battery that prolongs battery life to up to 16 hours. As for ruggedness, the Getac T800 is IP65 certified and meets the requirements of the American military standard MIL-STD-810G: in other words, it survives shock, drops, vibration and the intrusion of liquids and dust, within certain limits of course.

The processor integrated in the tablet is an Intel Pentium N3530, a quad-core 2.16 GHz part, supported by 4GB of DDR3L RAM. Given the nature of the device, Getac offers an optional cellular data module compatible with 4G LTE networks, as well as 802.11ac Wi-Fi and geolocation via a SiRFstarIV GPS. The display is also noteworthy, according to data released by the company.


Thanks to the proprietary LumiBond and QuadraClear technologies, the 8.1″ 1280×800 TFT LCD panel is able to reach a maximum luminance of 600 nits without drastically compromising the device's battery life. The Getac T800 is available to order with a range of built-in storage options from 64 to 128GB.

Getac T800 – Technical Specification

  • Intel Pentium N3530, 2.16 GHz
  • Operating system: Windows 7 or Windows 8
  • Tablet dimensions: 227 x 151 x 24 mm
  • Multi-touch 8.1” display, 1280×800, 600 nits
  • Weight: 880 g
  • Storage: 64GB / 128GB SSD
  • SiRFstarIV™ GPS
  • Fully rugged: MIL-STD-810G, IP65
  • Super-fast Wi-Fi: 802.11ac
  • Extensive connectivity with patented internal 3D antenna design
  • SnapBack expansion: optional 2-in-1 barcode/RFID reader or second battery
  • Battery life: 8 hours / 16 hours with the SnapBack battery
  • Tri RF pass-through (WWAN, WLAN and GPS)

360 by 240 for under $800?




Remember 360fly? The panoramic camera from EyeSee360, which built the panoramic GoPano iPhone lens, has been on the scene since early this year at NAB. Now, the WiFi- and Bluetooth-equipped 360fly camera has been given the nod by the FCC, and an attached review manual gives us a better idea how it works. As the company showed earlier, it's a single-lens 360 degree horizontal and 240 degree vertical fisheye lens that has "the widest field of view on the market." It uses an iOS 8 or Android 4.3+ app that turns your Bluetooth LE-equipped smartphone into a 360 degree video viewfinder with full remote control.

Using a single lens and sensor eliminates the need to "stitch" video and photos -- instead, you can swipe across the video to pick the angle you want, then edit it together using the 360fly app. As for the rest of the specs: according to the FCC guide, it's waterproof to a whopping 164 feet (5 atm) with 16GB of memory, a 360 degree horizontal and 240 degree vertical f/2.5 lens, a mic and a 1,504 x 1,504 1/2.3-inch CMOS sensor. It can hold a charge for 2 hours, weighs 138g (0.3 pounds) and comes with a tilt mount, power cradle and USB port. You may have noticed that I didn't mention an SD card -- it looks like it'll make do with 16GB of internal memory only, with large transfers to your mobile device by WiFi.
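To make the "swipe to pick the angle you want" idea concrete, here is a rough sketch of how a perspective view can be pulled out of a single fisheye frame without stitching. It assumes an ideal equidistant fisheye projection, centered in the 1,504 x 1,504 frame and covering the 240-degree field of view quoted above; 360fly has not published its actual lens model or processing, so treat this purely as an illustration.

```python
# Sketch: render a perspective "viewfinder" crop from a single fisheye frame.
# Assumes an ideal equidistant fisheye (r = f * theta) centered in the frame with a
# 240-degree field of view -- an approximation, not 360fly's real lens model.
import numpy as np
import cv2

def fisheye_to_perspective(fisheye, yaw_deg=0.0, pitch_deg=20.0,
                           out_size=(640, 360), out_fov_deg=90.0):
    w_out, h_out = out_size
    H, W = fisheye.shape[:2]
    cx, cy = W / 2.0, H / 2.0
    f_fish = (min(W, H) / 2.0) / np.deg2rad(240.0 / 2.0)   # equidistant: r = f * theta
    f_out = (w_out / 2.0) / np.tan(np.deg2rad(out_fov_deg / 2.0))

    # Rays through each output pixel of a virtual pinhole camera looking along +z.
    xs, ys = np.meshgrid(np.arange(w_out) - w_out / 2.0,
                         np.arange(h_out) - h_out / 2.0)
    rays = np.stack([xs, ys, np.full_like(xs, f_out)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    # Yaw/pitch pick where the user has "swiped" the virtual view.
    yaw, pitch = np.deg2rad(yaw_deg), np.deg2rad(pitch_deg)
    Ry = np.array([[np.cos(yaw), 0, np.sin(yaw)], [0, 1, 0], [-np.sin(yaw), 0, np.cos(yaw)]])
    Rx = np.array([[1, 0, 0], [0, np.cos(pitch), -np.sin(pitch)], [0, np.sin(pitch), np.cos(pitch)]])
    rays = rays @ (Ry @ Rx).T

    # Project the rotated rays back into fisheye image coordinates.
    theta = np.arccos(np.clip(rays[..., 2], -1.0, 1.0))   # angle from the optical axis
    phi = np.arctan2(rays[..., 1], rays[..., 0])
    r = f_fish * theta
    map_x = (cx + r * np.cos(phi)).astype(np.float32)
    map_y = (cy + r * np.sin(phi)).astype(np.float32)
    return cv2.remap(fisheye, map_x, map_y, cv2.INTER_LINEAR)

# Usage: view = fisheye_to_perspective(cv2.imread("fisheye_frame.jpg"), yaw_deg=45)
```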



Rather than tackling sports cam stalwarts like GoPro directly, the 360fly is trying to carve its own niche by capturing video from all around the photographer without the complexity of multiple lenses. EyeSee360 must have a convincing case, because it raised $17.8 million from bullish investors. Though the company promised it for this summer, it looks like it's ready to at least take a step towards the market now that it's cleared the FCC. Some of the specs might be revised in the interim, but for now, the manual is marked "Reviewer's Guide," so hopefully we'll get our mitts on it soon.

Thursday, October 23, 2014

Killing off Counterfeit COM... the end is near?

Windows Update driver bricks counterfeit FTDI USB-to-serial chips

Driver changes fake FTDI chip settings, renders it unusable

A recent Windows Update is causing trouble for people working with Arduino microcontrollers and other similar projects, by making some hardware inoperable. A driver update for FTDI chips as part of the Windows Update is apparently damaging the software on some USB-to-serial components, with counterfeit chips suddenly becoming inoperable.
Due to the prevalence of FTDI serial chips in hobbyist electronics, there are now many fake chips on the market claiming to be from FTDI that function with the previous official drivers. Hack A Day warns that the new drivers do not merely prevent the counterfeit chips from working with Windows systems: they change the code on the chips so that the Product ID (PID) is set to 0000, making them effectively unusable on any other platform.
Though damaging, the issue can still be reversed, as FTDI offers a configuration tool which can be used to change the PID back. Even so, the PID will be reset to 0000 again if the chip comes into contact with the newer drivers.
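A quick way to check whether a board on your bench has been hit is to look for FTDI's vendor ID (0x0403) paired with a product ID of 0x0000. A minimal sketch using pyusb; the "known good" PID list covers common FT232/FT2232-family parts and is illustrative rather than exhaustive.

```python
# Sketch: scan the USB bus for FTDI devices and flag any whose PID has been zeroed.
# Requires pyusb and a libusb backend; run with sufficient permissions to enumerate USB.
import usb.core

FTDI_VID = 0x0403
KNOWN_GOOD_PIDS = {0x6001, 0x6010, 0x6011, 0x6014, 0x6015}  # common FTDI PIDs (not exhaustive)

def scan_ftdi():
    for dev in usb.core.find(find_all=True, idVendor=FTDI_VID):
        pid = dev.idProduct
        where = f"bus {dev.bus} addr {dev.address}"
        if pid == 0x0000:
            print(f"{where}: PID 0x0000 -- likely a counterfeit bricked by the new driver")
        elif pid in KNOWN_GOOD_PIDS:
            print(f"{where}: PID 0x{pid:04x} -- looks normal")
        else:
            print(f"{where}: PID 0x{pid:04x} -- unrecognised FTDI product ID")

if __name__ == "__main__":
    scan_ftdi()
```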

Ars Technica notes that the EULA for the drivers includes some new terms, warning that use of the driver with "a component that is not a Genuine FTDI Component, including without limitation counterfeit components, may irretrievably damage that component." It is unclear whether the drivers are acting maliciously, as it is possible the counterfeit chips are misinterpreting commands and causing the issues themselves, rather than this being a deliberate attempt by the company to damage the fake components.

FTDI has yet to comment about the issue. When questioned, Microsoft refused to comment and referred Ars Technica to FTDI over the matter.

Read more: http://www.electronista.com/articles/14/10/23/driver.changes.fake.ftdi.chip.settings.renders.it.unusable/#ixzz3GzXUtkxF

Wednesday, October 15, 2014

Samsung NX1



Good to go for VMS333

Tuesday, October 14, 2014

MODESTA

MODESTA: The Evolution of System Integration

As Army tactical networks become more complex, there’s an increased need to perform system-of-systems (SoS) engineering early in the development and integration process. New technologies don’t always work seamlessly with the fielded equipment – which makes system integration and improvement a challenge. That’s why the tools used to analyze these tactical networks must also evolve.
(Graphic illustration provided by CERDEC/Released)
Conducting SoS analysis upfront will enable stakeholders to find potential issues and implement solutions before large investments have been made. Leveraging a modeling and simulation (M&S) environment provides the most cost-effective means to do this large scale and beyond the component level.
However, many existing M&S capabilities can’t answer today’s complex network questions, such as detailed routing and latency analysis.
Moreover, the current M&S landscape is disjointed and stovepiped. We need to take a conscious look, as an Army, at the cost in terms of reining in spending and upfront coordination of capabilities across the community. It is imperative that the Acquisition and Science and Technology (S&T) communities work toward a defined end state for M&S capabilities.
To this point, we’re developing a holistic tactical modeling, simulation and emulation tool that will not only allow early SoS engineering, but will also streamline M&S across the Acquisition and S&T communities.
This will reduce redundant analysis and duplicate spending by project managers and provide significant, long-term cost savings to the Army.
The Modeling, Emulation, Simulation Tool for Analysis (MODESTA) provides a large-scale, tactical network analysis environment with a centralized framework so analysts can conduct realistic, operational scenarios with emulated and simulated systems – all while accessing centralized data models and data collection, reduction and analysis tools.
Engineers demonstrate live hardware, a radio in this case, being emulated and tested in a lab setting while the adjoining image shows a drawing of the radio used in the field. (Graphic illustration provided by CERDEC/Released)
MODESTA enables the utilization of live and emulated hardware (such as tactical radios or routers), which provides the user a virtual environment in which to interact with live hardware, thus providing true performance characteristics at scale while running real applications – such  as Mission Command and Fires applications.
MODESTA’s framework enables seamless interaction between live hardware and emulated systems; this allows for increased scalability with few limitations, providing a good picture of how the technology is going to interact in the full system-of-systems network/environment before the tech provider gets too far along in development.  Furthermore, you’re not pulling radios, unmanned aerial systems, or vehicles out of the field to use as training/laboratory assets.
There are several simulation/emulation environments and a variety of models being utilized by different organizations, but these are not accessible to the greater M&S community.
The MODESTA configuration management databases will compile these high fidelity models along with any past analysis associated with the models – such as what was done, by whom, in what scenario, with what traffic, yielding what results.
It will also allow analysts to replicate those scenarios while adding their own logistical and environmental variables with a few clicks of the mouse, so you don’t have to duplicate the effort of setting up your scenario every time you want to move to a new environment. This will reduce duplicative analysis while helping shape and advance future testing.
The MODESTA framework will also enable the S&T and Acquisition communities to perform cross-Program Executive Office analysis: threats, intelligence systems, Distributed Common Ground System-Army, and sensor feeds can be evaluated in conjunction with the tactical communications network.
(Graphic illustration provided by CERDEC/Released)
Additionally, MODESTA’s modular framework will provide most of the communications infrastructure and the data collection and reduction so users can evaluate multiple types of systems – ranging from Mission Command applications to cyber defensive/offensive systems to sensors – on a scalable network.
Working under the MODESTA framework will create cost efficiencies in licensing, waveform development, server costs and maintenance, and the man hours needed to set up varying analyses.  The end state could reduce spending by as much as 80 percent.
We’re applying it to our R&D work to see how it’s going to interact with the existing PM systems; this will aid in the technology transition of CERDEC tech-base work while allowing for technology progression from Technology Readiness Level 3-6 with a focus on SoS integration.
We’re also partnered with PEO C3T to build a brigade-scale, high-fidelity M&S environment where we’ll be replicating a future capability set for the Army, using high-fidelity emulation of a brigade and live hardware from the CERDEC C4ISR Systems Integration Lab (CSIL). We will have the initial capability by Jan 2015. One year after that, we hope to scale up to a division – and potentially an Army Corps.
CERDEC S&TCD has a long history of creating tactical communications models, conducting simulation and emulation analysis, and performing tactical data collection/reduction for analysis – from individual PMs to our lab-based and field-based risk reduction support of NIEs.  As a result, we know what data, tools and processes are useful for evaluating these systems.
That’s why we’re leading the charge toward an open, modular, high-fidelity M&S tool that will allow stakeholders to make the best decisions as to which new technologies get fielded and how they get fielded.
We see the potential of this as a cross-PEO tool, but we’d like to see this M&S environment be accessible across the Army S&T community to all the PMs. It will be easier for the Army to achieve a unified vision for our tactical networks if we’re all on the same page using the same analysis results to make decisions that will support our soldiers.
Written by:
Joshua Fischer, chief of the Data Collection, Analysis, Modeling and Simulation (DCAMS) branch in the CERDEC S&TCD Systems Engineering, Architecture, Modeling and Simulation (SEAMS) Division
Noah Weston, Modeling and Simulation Team Leader (acting) for CERDEC S&TCD Systems Engineering, Architecture, Modeling and Simulation (SEAMS) Division

Friday, October 10, 2014

Serious Stuff - Google to tackle Ellison and Oracle Java Claims...

Google asks Supreme Court to weigh in over Java API use in Android

By Electronista Staff

The battle between Google and Oracle could be heating up again in the near future, as the search giant has petitioned the US Supreme Court to review the case for a final ruling. Previously, the US Court of Appeals for the Federal Circuit overturned a lower court ruling that found Google didn't infringe on Oracle's copyright by using pieces of open-source Java APIs in Android without a license.
The legal battle dates back to 2011, when Oracle accused Google of copying aspects of 37 Java APIs in Android. Oracle was seeking more than $1 billion in damages for Google's use of code elements including headers, names and declarations. US District Judge William Alsup found in 2012 that Google wasn't guilty of infringement as APIs weren't subject to copyright protection.

Appeals court Judge Kathleen O'Malley believed otherwise, stating that the "structure, sequence, and organization of the 37 Java API packages at issue are entitled to copyright protection." The decision to overturn in Oracle's favor sent waves through the tech industry: Oracle considered it a win for the software industry, while Google thought it set a dangerous precedent.

Google believes that the ruling from the US Court of Appeals would allow for "copyright monopolies" over some of the most basic computer programming and design elements. The company draws parallels to Remington's development of the QWERTY typewriter keyboard design. Had the appeals court applied the same logic to that invention as it did in the Oracle case, Remington could have "monopolized not only the sale of its patented typewriters for the length of a patent term, but also the sale of all keyboards for nearly a century."

Google says Remington wouldn't have been able to win a copyright infringement lawsuit against companies like IBM and Apple for their additions to the keyboard layout. Users expected companies to use the QWERTY design, it argues, after investing time to learn it. Remington wouldn't be "entitled to appropriate the investments" others made in learning how to use the keyboard. Google suggests that the Java APIs it used are important for function, but are not creative works that would be granted protection. Those APIs were added to Android because programmers were familiar with them, had learned the elements and expected to use them.

"As relevant here, a person writing an Android application in the Java language may use shorthand commands to cause a computer to perform certain functions, such as choosing the larger of two numbers," said Google. "Programmers have made significant investments in learning these commands; they are, in effect, the basic vocabulary words of the Java language. When programmers sit down to write applications, they expect to be able to use them."

Google started the process of filing the petition back in July, asking for an extension to file until October 6. Even though the petition was accepted this month, the Supreme Court has until November 7 to respond.

Read more: http://www.electronista.com/articles/14/10/09/google.asks.supreme.court.to.weigh.in.over.java.api.use.in.android/#ixzz3FkKXlgtc

Sunday, October 5, 2014

GoPro gets some GO - All Brightsky

A bit of news: the newly listed GoPro stock was partially gifted to get around some of the rules on cashing out. Are new stockholders potentially in the process of going after the board of directors once the founders and owners need to get their value out?


A few months ago, we told you about BrightSky Labs, a startup that hoped to unlock videos recorded on GoPro cameras and other wearable devices and make them easy to edit and share. Today, the company is releasing the first version of its video-sharing app 10, which is designed to do just that.


The 10 app was created to reduce the friction GoPro users currently have when finding and editing videos to share. Currently, anyone who attaches a GoPro to their snowboard, surfboard or any other device usually ends up waiting until they get home and upload videos to their computers before being able to access them. Then they have to go through the trouble of sorting through all the content they recorded for just the choice bits and cut them down before uploading them to YouTube or other networks.

BrightSky Labs hopes to simplify that process, which they believe will make for a lot more shareable and shared GoPro content making its way online. The 10 app makes it simple for extreme sports enthusiasts to finish recording, check out the videos they’ve recorded, and get right back to the slopes or the surf, or whatever it is they’re being extreme on.

The app connects directly to a user’s GoPro camera via Wi-Fi, enabling it to capture video as it’s being recorded, or to access pre-recorded videos that are already saved on those devices. The magic of 10 comes from an algorithm that quickly helps users discover the most interesting snippets from their recordings, and to quickly cut them into shareable bits.
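BrightSky has not published how that algorithm works, but the general idea of scoring footage for "interesting" moments can be sketched with something as crude as frame-differencing motion energy. This is a stand-in illustration only, not the 10 app's actual method; the file name and sampling interval are placeholders.

```python
# Sketch: score a clip for high-motion moments by frame differencing with OpenCV.
# A crude stand-in for a highlight detector -- not BrightSky's actual algorithm.
import cv2
import numpy as np

def motion_scores(path, sample_every=5):
    """Return (timestamp_seconds, motion_score) pairs for sampled frames."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    scores, prev, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_every == 0:
            gray = cv2.cvtColor(cv2.resize(frame, (320, 180)), cv2.COLOR_BGR2GRAY)
            if prev is not None:
                score = float(np.mean(cv2.absdiff(gray, prev)))  # mean per-pixel change
                scores.append((idx / fps, score))
            prev = gray
        idx += 1
    cap.release()
    return scores

def top_moments(scores, n=5):
    """Pick the n highest-motion timestamps as candidate highlights, in clip order."""
    return sorted(sorted(scores, key=lambda s: s[1], reverse=True)[:n])

# Usage: print(top_moments(motion_scores("gopro_run.mp4")))
```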


The app’s editing bay has a two-axis slider that makes it easy for users to scroll to a shareable section of video by sliding the cursor left or right, and also to change the length of the bit they want to share by sliding up or down. It also is able to recognize where a video is shot and suggest filters that users might want to overlay on the video.

Users can also add music or narration to their videos, either from their phone’s microphone or from a selection of licensed tracks that are available in the 10 app. Once that’s all done, it’s time for users to share their videos to networks like YouTube, Facebook, Vine, and WhatsApp.

According to co-founder Ian McCarthy, BrightSky Labs picked the name “10” for the app because the team wanted its brand “to be about the enjoyment from using the tech, not focused on the tech itself.” After testing with users, the company found the name resonated with users and how they felt about their adventures. “We heard pretty unanimously that it means for them ‘the best,’” McCarthy wrote in an email.

Saturday, September 27, 2014

Out with Old - In with the Printer

3D Systems Just Broke the Speed Barrier, Surpassing Traditional Injection Molding Manufacturing Techniques

You would have to be living under a rock if you haven’t heard about 3D printing and additive manufacturing yet. The media has been all over the up-and-coming technology, while businesses, individuals, and even music groups who want attention just have to find a way to mention or utilize the technology in some way or another. Despite all the attention, and all the predictions of a changed world as a result of this technology, there are still many skeptics.


For every researcher, scientist, or person working in the additive manufacturing field who says that it’s the future of manufacturing, there is at least one individual to counter their claim. “3D printing is not fast enough.” “It will never be used for mass production.” “Injection molding will never be replaced by additive manufacturing on a mass scale.”

These are all opinions I have personally heard uttered by some very educated, respected individuals. Although at the time I may have wanted to jump in and call them out on their ignorance, I respected their opinions and instead decided to sit back and wait a few years for the technology to progress enough to prove them wrong.

Here we are, not even halfway through 2014, and 3D Systems may have already delivered a near-knockout blow to the 3D printing skeptics out there. Today the company, which invented much of the technology behind 3D printing, announced a major breakthrough: for the first time ever, it has shown that its fab-grade 3D printers can match and exceed the productivity and speed of traditional injection molding in creating functional parts.


“Our unwavering commitment to customer success through innovation has literally broken the mold this time – challenging the myth that 3D printing can’t match the productivity of injection molding,” said Cathy Lewis, 3DS’ CMO. “This is just the beginning. We are working on additional applications that defy traditional manufacturing constraints, allowing our customers to go from idea to product in hours, instead of months – to truly manufacture the future.”

What’s even more amazing is that we are only at the starting gate in the development of high-speed, extremely productive machines capable of mass producing both identical and customized parts. In fact, 3D Systems also pointed out that we are in the midst of a Moore’s-law-like exponential progression when it comes to the speed of these printers: on average, over the last ten years, the capabilities of 3D printers have doubled every 18 months, according to the company.



Mass Manufacturing via SLA 3D Printing

3D Systems gave an example of their success. Recently they printed out 2,400 tiny lamp shades on one of their stereolithographic printers. It took a total of 20 hours to print. This equates to approximately 30 seconds per part. All the while, there was no need for tooling or lengthy supply chains.
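The arithmetic behind that per-part figure checks out; a quick sanity check using only the numbers quoted above:

```python
# Quick check of the quoted throughput: 2,400 parts in 20 hours of printing.
parts, hours = 2400, 20
print(f"{hours * 3600 / parts:.0f} s per part, {parts / hours:.0f} parts per hour")
# -> 30 s per part, 120 parts per hour
```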

Clearly we have just entered a realm in which additive manufacturing has proven to be just as effective as, if not more effective than, traditional injection molding techniques for many mass-manufacturing cases. The next several years, or perhaps sooner, should prove that the path to future manufacturing is not injection molding but 3D printing. Let us know your opinions on this announcement in the ‘3D printing breakthrough‘ forum thread at 3DPB.com. Check out the video just released by the company discussing this impressive breakthrough in additive manufacturing.

GAS Anadarko

Bridging the generation gap for spatially-enabled asset management


I've blogged many times about the aging workforce challenge facing many industries.  At the SPAR International conference Wayne Rodieck of Anadarko Petroleum outlined what this means to the oil and gas industry and how he is using IT to bridge the generation gap.

The International Energy Agency (IEA) is projecting that by 2020 the U.S. will surpass Saudi Arabia as the world's largest oil producer.  US oil and gas production, driven by technologies that are unlocking light tight oil and shale  gas resources, is rising dramatically.  Since 2008 the U.S. oil and gas industry has increased production by 25%.  This expansion has created 1.7 million new jobs.


Most of these jobs have been filled by young workers with little or no experience in oil and gas, but who are better acquainted with IT than the experienced workers on the verge of retirement. As with other industries, oil and gas faces the challenge of transferring knowledge so that younger workers can be as productive as possible, and of avoiding a massive decline in productivity as the older generation retires.

Dr Apostol Panayotov of UC Denver described an online spatially-enabled asset information system at Anadarko that is designed to be accessible to both generations.


It provides a web-based user interface designed to serve up information about equipment on oil and gas facility sites, including visual photographs, facility attributes and geolocation. It is based around point clouds captured by scanning valves, pumping stations, and other oil and gas pipeline infrastructure, but it hides the point cloud behind an intuitive interface that relies on smart digital photographs of facilities. Clicking on a particular piece of equipment such as a valve, tank, or catwalk in a digital photograph links the user to information about that piece of equipment, including geolocation and dimensions, information that is derived from the point cloud.

Wayne described a simple use case that explains what he sees as the critical advantage of this approach. There is an emergency, and at 2:30 am he has to send a young, inexperienced worker out to a site with 120 valves to turn off one of them. He feels confident that by giving the young worker access to this intuitive online system, the worker will turn off the right one.

Sunday, September 21, 2014

What the PDF ...?


Geographic Imager is software for Adobe Photoshop that leverages the superior image editing capabilities of the world’s premier raster-based image editing software and transforms it into a powerful geospatial production tool. Work with satellite imagery, aerial photography, orthophotos, and DEMs in GeoTIFF and other major GIS image formats using Adobe Photoshop features such as transparencies, filters, and image adjustments while maintaining georeferencing and support for hundreds of coordinate systems and projections.
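That "maintaining georeferencing" step is the crux, because an ordinary image editor will silently drop a GeoTIFF's spatial tags. As a rough, hedged sketch of the idea (using the open-source rasterio library rather than Geographic Imager itself, with placeholder file names), georeferencing can be copied from the original GeoTIFF back onto an edited raster that kept the same pixel grid:

```python
# Sketch: re-attach georeferencing (CRS + affine transform) from an original GeoTIFF
# to an edited raster that lost its tags. Uses rasterio, not Geographic Imager itself;
# file names are placeholders and the edited image must keep the original pixel grid.
import rasterio

def copy_georeferencing(original_tif, edited_tif, out_tif):
    with rasterio.open(original_tif) as src:
        crs, transform = src.crs, src.transform
    with rasterio.open(edited_tif) as edited:
        data = edited.read()
        profile = edited.profile
    profile.update(crs=crs, transform=transform, driver="GTiff")
    with rasterio.open(out_tif, "w", **profile) as dst:
        dst.write(data)

# Usage: copy_georeferencing("ortho_original.tif", "ortho_edited.tif", "ortho_edited_geo.tif")
```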

Geographic Imager 4.5 is immediately available and free of charge to all Geographic Imager Maintenance Program members and at US$319 for non-maintenance upgrades. New fixed licenses start at US$699. Geographic Imager Basic licenses start at US$199. Academic, floating and volume license pricing are also available. Geographic Imager 4.5 is compatible with Adobe Photoshop CS5, CS5.1, CS6, CC and CC 2014.


The PDF Maps app is a geospatial PDF, GeoPDF® and GeoTIFF reader for your Apple iOS and Android smartphones and tablets. Easily search for and browse thousands of professionally made maps available in the Avenza Map Store. Interact with spatially referenced maps to view your location, record GPS tracks, add placemarks, and find places.

Exporting an ArcGIS Map for Use in the PDF Maps App from SmithGIS on Vimeo.


For the geospatial community, PDF Maps complements MAPublisher and Geographic Imager, both of which have the ability to export to geospatial PDF and GeoTIFF. The formats are also supported by common GIS applications, including ArcGIS.

PDF Maps is used by individuals, companies and organizations for navigation, information collection, and sharing geographic information and knowledge.

Tuesday, September 16, 2014

Skin Cancer ... App for that



You can already use your smartphone to do things like hail a pimped-out ride home or order an artisan pizza, so obviously the next step is cancer detection, right? Researchers at the University of Houston think they've created a smartphone app that can detect melanoma even better than your doctor. Called DermoScan, the app works by taking a photo of your odd-shaped mole and then analyzing it to determine if it might be cancerous. Initial testing found that DermoScan was able to identify skin cancer roughly 85 percent of the time, making it just as effective as visiting a dermatologist and even better at diagnosing melanoma than the average primary care physician. Don't head over to the app store just yet, though: the app requires a special $500 magnifying glass to make the magic happen -- not exactly more cost effective than a trip to the old MD.

It might not make sense for the average American to shell out five bills for the necessary equipment to use DermoScan, but the app could be big news for developing countries and rural areas where there isn't a primary physician for people to see. One device could potentially diagnose an entire village. Paired with something like Wello's tricorder-esque iPhone case, an iPhone 5s could become a powerful tool in helping an entire town determine whether they need to travel to see a doctor -- all for less than your average trip to the ER.

Samsung takes the 4K Production Camera Lead






Samsung has introduced the pro-oriented NX1 mirrorless camera for Photokina 2014, boasting the first-ever APS-C sized BSI-CMOS sensor. The 28.2MP NX1 also offers a sophisticated hybrid autofocus system with 205 phase-detect points covering 90% of the frame, and the camera is weather-sealed to resist the elements.

The list of high-end specifications reels on: the NX1 is capable of recording 4K video, offers 15 fps burst shooting with continuous autofocus, and provides a 2.36M-dot OLED EVF. Being a Samsung camera, it offers advanced wireless features including built-in 802.11ac Wi-Fi and Bluetooth connectivity. Connectivity can auto-shift from Bluetooth pairing to smartphone Wi-Fi, which allows camera settings selection, image monitoring, and other real-time controls such as zoom. The camera has 6-DoF orientation awareness but no GPS positioning unless it is paired with an appropriate smartphone.

The Samsung NX1 will be offered body-only for $1499.99, or as a 'premium kit' with 16-50mm F2-2.8 lens, battery grip, and extra battery for $2799.99.

Convergent Design Slips... Hold-on there Cowboy.

Odyssey 7Q - Someday the four-channel concurrent 1080p recorder, for now a disappointment




Shogun - more a proxy recorder and monitor; no microphone



Unlike the Convergent Design 7Q, which still does not accept 4K from the A7S and uses proprietary SSDs, the Shogun makes use of standard 2.5″ SSD drives. They also had a caddy for CFast cards (which the Ninja Star uses), so you can use those on the Shogun as well.

http://www.newsshooter.com/2014/08/19/convergent-design-odyssey-7-and-7q-get-another-new-firmware-update-adds-1080p-50-and-59-94-prores-recording/

Sunday, September 14, 2014

Engadget and Sony Action Cams... with GPS

Sony Action Cams are ready to stream live internet video


Sony Action Cam AS100V


Sony Action Cam owners: if you're eager to share your sporting adventures with the world, your moment has come. The company has just rolled out a firmware update for the AS100V (installable on Macs or Windows) that lets you broadcast live video on Ustream, complete with social network alerts when you're on the air. The higher-end camera also gets a new Motion Shot Mode that composites several photos into one, while burst shooting and self-timer modes are useful for both action-packed images and self-portraits.

You won't get live streaming or high-speed photography if you're using the more modest AS30V cam, but you're not out of luck. It's getting its own upgrade (available on Macs and Windows) that delivers multi-camera control through an optional remote, better automatic exposure and the use of WiFi without a memory card. Hit the source links if you're ready to expand your cinematic repertoire.

Android Cameras ... Building Google Earth Fly-throughs



Android founder: We aimed to make a camera OS
The creators of Android originally dreamed it would be used to create a world of "smart cameras" that connected to PCs, a founder said, but it was reworked for mobile handsets as the smartphone market began to explode.
"The exact same platform, the exact same operating system we built for cameras, that became Android for cellphones," said Android co-founder Andy Rubin, who spoke at an economic summit in Tokyo.

Rubin, who became a Google executive after the search giant acquired Android in August 2005, said the plan was to create a camera platform with a cloud portion for storing photos online.

He showed slides from his original pitch to investors in April 2004, including one with a camera connected "wired or wireless" to a home computer, which then linked to an "Android Datacenter."

But growth in digital cameras was gradually slowing as the technology became mainstream. Rubin's company revamped its business plan: A pitch from five months later declares it to be an "open-source handset solution."

Android kept its software core, including its Java core. The operating system's use of Java is at the heart of an ongoing multi-billion dollar lawsuit filed against Google by Oracle, around which an eight-week jury trial has just begun.

Back in 2005, the company added team members who had experience at companies like T-Mobile and Orange, and began to target rivals like mobile versions of Windows. Apple didn't enter the market until 2007.



Android co-founder Andy Rubin, speaking in Tokyo. (IDGNS)

"We decided digital cameras wasn't actually a big enough market," said Rubin. "I was worried about Microsoft and I was worried about Symbian, I wasn't worried about iPhone yet."

Rubin said there was an opportunity at the time because even as hardware costs fell steeply due to commoditization, software vendors were charging the same amount for their operating systems, taking up an ever larger part of manufacturers' budgets. As Android considered its product to be a platform for selling other services and products, the company aimed for growth, not per-unit income.

"We wanted as many cellphones to use Android as possible. So instead of charging $99, or $59, or $69, to Android, we gave it away for free, because we knew the industry was price sensitive," he said.

Handsets worked out better than cameras. An original "ambitious" projection by the company aimed for a 9 percent market share in North America and Europe by 2010; Android hit 72 percent last year. Google said in March that over 750 million Android devices have gone online globally.

The Android operating system also eventually returned to its roots. Samsung has launched a Galaxy Camera that runs Android, along with similar offerings from makers including Nikon and Polaroid. The OS has been used in devices including tablets, TVs, espresso makers and refrigerators.

Rubin was a speaker at the Japan New Economy Summit held Tuesday in Tokyo. The summit was backed by a Japanese business group that aims to kick-start the country's economy.

In March, Google announced that Rubin had stepped down from his role leading Android in order to "start a new chapter" at the company.

Rubin said Tuesday he would continue to create products directed at end users.

"I can pretty much guarantee you that whatever I do next it's going to be something that delights consumers."

Citizen Sensor - StormTag


StormTag Is A Waterproof Weather Sensor That Wants To Help Crowdsource Hyperlocal Forecasts


StormTag is a key-fob sized sensor for measuring weather data that’s designed to contribute to a crowdsourced network of other sensors to map aggregated weather data and offer localized predictions. It works with Bluetooth LE smartphones and tablets to sync its data back to the cloud, where users can view weather info in an app.
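As a rough illustration of that Bluetooth LE sync step, here is a sketch using the bleak library to find a nearby tag and read a temperature characteristic. StormTag's actual GATT layout is not published, so the advertised name and the use of the standard Bluetooth SIG temperature characteristic are assumptions for illustration only.

```python
# Sketch: find a BLE weather tag and read one characteristic with bleak.
# The device name and characteristic UUID are assumptions -- StormTag's real GATT
# layout isn't public -- so treat this as an illustration of the sync step only.
import asyncio
from bleak import BleakScanner, BleakClient

TAG_NAME = "StormTag"                                    # assumed advertised name
TEMP_CHAR_UUID = "00002a6e-0000-1000-8000-00805f9b34fb"  # standard Temperature characteristic (assumed)

async def read_tag():
    device = await BleakScanner.find_device_by_filter(
        lambda d, ad: d.name is not None and TAG_NAME.lower() in d.name.lower()
    )
    if device is None:
        print("no tag found")
        return
    async with BleakClient(device) as client:
        raw = await client.read_gatt_char(TEMP_CHAR_UUID)
        # The SIG temperature characteristic is a signed 16-bit value in 0.01 degC units.
        temp_c = int.from_bytes(bytes(raw[:2]), "little", signed=True) / 100.0
        print(f"{device.name}: {temp_c:.2f} degC")

if __name__ == "__main__":
    asyncio.run(read_tag())
```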

The StormTag sensor comes in two versions: the basic StormTag, which includes a temperature and barometric pressure sensor and costs $25; or the $35 StormTag+, which also includes a humidity sensor, a UV sensor, and on-board memory so it can log data for as long as the battery lasts and sync it later to a phone or tablet.

StormTag is currently a prototype while its makers run a Kickstarter crowdfunding campaign to get the device to market, aiming to ship to backers in November.

Flagship smartphones are getting increasingly sensor-packed these days. Samsung added temperature, pressure, and humidity sensors to its Galaxy S4 device last year, for instance, giving it the ability to measure weather data. But there are plenty of lower priced handsets that don’t have such a fancy array of sensors, which means there’s scope for a standalone device that contains the necessary hardware and syncs with a smartphone to ferry its enviro data load up into the cloud.

Add to that, phones are often kept tucked away in a bag or pocket, rather than deliberately left exposed to the elements, so a standalone sensor for environmental data logging makes some sense.

StormTag is aiming to be such a standalone device, competing with the likes of CliMate. Both devices are currently raising funds on Kickstarter, but StormTag has already more than doubled its original funding target of $17,500, so it has the money to make it to market in the bag already, still with well over a month of its crowdfunding campaign to run.

StormTag is another hardware project from Jon Atherton, whose prior successfully crowdfunded creations include the YuFu, Nota and JaJa styli and, more recently, an e-ink Bluetooth bedside clock called aclock.

Atherton says he’s recycling some of the parts used in the YuFu stylus for StormTag, specifically the pressure-sensitive electronics, which has allowed him to bring the StormTag to market with a relatively low funding target. CliMate, by comparison, is shooting for a $50,000 raise.

Atherton is also partnering with crowdsourced weather map WeatherSignal, so users of StormTag get access to data being generated by WeatherSignal’s network of environment-sensor-equipped Android phones from the get-go, to help circumvent the problem of needing a large uptake before StormTag starts generating useful data. It also means he doesn’t have to build his own app, since WeatherSignal will be taking care of the end-user software.

“WeatherSignal already have a large body of data — and we will be adding to that with users of iOS and other Android devices that don’t have inbuilt sensors,” says Atherton, noting that the WeatherSignal app has 50,000 active devices and 230,000 total installs.

“Globally, WeatherSignal is averaging out at just around 2 million readings per day. Note — each reading is a timestamped, geolocated set of sensor readings — so a single reading covers many sensors, so there are several million data points per day already stored in WeatherSignal. So StormTag builds on this WeatherSignal data.”

Atherton is giving the StormTag crowdsourced weather project a two-year timeframe to build into a really useful hyperlocal weather forecasting ecosystem.

“My two year target is to have accumulated enough historic data that we can do useful predictions using just one StormTag — we’ll also be building out the crowd sourced weather predictions and mapping,” he says, adding: “In the interim, we’ll be delivering some fantastic data to our users, as well as hyper local readings.”

The StormTag has a hole in it so it can easily clip onto keys or clothing, and is being designed to be waterproof so it can be used outdoors in scenarios like skiing or boating.

With StormTag’s funding target already met, Atherton says he’s kicked off the production process already, and reckons he’ll be able to deliver it to backers earlier than scheduled.