‘iPhone 6s’ Mass Production Ramping Up in Late August Ahead of Mid-September Debut
KGI Securities analyst Ming-Chi Kuo today provided more information on the launch of the next series of iPhones, the so-called “iPhone 6s” and “iPhone 6s Plus,” noting that mass production on the smartphone line will begin in late August, according to Taiwan’s Central News Agency [Google Translate] (via GforGames). The production ramp-up will reportedly come roughly 1-2 weeks later than originally expected, but the change should not affect Apple’s launch plans, which are anticipated to involve a mid-September debut, as has been typical for the past several years.
iPhone 6s display assembly
The analyst notes that Foxconn is manufacturing 60 percent of all iPhone 6s models, with Pegatron taking the remaining 40 percent. Foxconn will reportedly handle all of the manufacturing for the larger-screened “iPhone 6s Plus.”
Components for the iPhone 6s such as A9 chips and Force Touch-equipped display panels have been said to have been in production for several weeks now, and all of these components will be brought together at Foxconn’s and Pegatron’s facilities for final assembly.
Apple is facing increasing demand for the new line of iPhones, and a few of Foxconn’s factories — such as the one in Zhengzhou — are rumored to be ramping up overtime for employees heading into mass production just to be able to meet demand. GforGames also notes that slowing production of the iPhone 6 line ahead of the updated models is providing some breathing room for the manufacturing and production of the new iPhones.
Earlier in the year, Kuo predicted a mid-to-late August time window for mass production on the new line of iPhones, so today’s 1-2 week delay still falls inside the previous estimate. Recent rumors regarding the “iPhone 6s” and “iPhone 6s Plus” include the addition of Force Touch, a slightly thicker body to accommodate the new technology, camera improvements and perhaps a rose gold or pink color option.
Jimmy Iovine Hints at Possible Apple Music-Like Apple TV Curation, Admits ‘Connect’ as Weak Point
Following an interview yesterday with Evening Standard, Wired today posted an interview with Apple Music executive Jimmy Iovine, in which the Beats co-founder admits the company’s need to work to make Connect a better platform for artists and fans alike, and even hints at a possible curation aspect for Apple TV, similar to that of the company’s new streaming music service.
“We all know one thing, we all have different television delivery systems, don’t we all wish that the delivery systems were better, as far as curation and service?” he says.
“They’re all technically good. And Netflix is starting to cross the code because they’re starting to make some original content. It is really good, but still I mean none of us make movies here right, so we’re all punters, or what do you call them in the music business, fans right? We want to watch movies. Sit down with your girlfriend or a bunch of friends and try to find a movie online. That box helps you none — it doesn’t help. You’re on your own. And eventually that will catch them unless somebody digs in and really helps the customer. And entertainment needs that, it needs to live and breathe.”
Iovine admits, however, that he wouldn’t be the one spearheading such an innovation for the TV side of things, doubly noting that if such a curation aspect for Apple TV did appear, it probably won’t be for some time. Speaking candidly with Wired, he said, “I’ll tell you man, right now, this [music] is so daunting that I can’t even think about anything else.”
Before WWDC this year, a brand-new A8-based Apple TV box was expected to premiere alongside a long-rumored Apple streaming content service. New rumors point towards a September reveal, alongside the new iPhones, for the set-top box, with no word yet regarding what stage of development the streaming service is in. No specific reference to in-depth curation has been made in the past regarding the next generation of Apple TV, however.
Elsewhere in the interview, Iovine tackles the subject of Apple Music’s Connect service, which allows artists to upload videos, songs, and short blog posts to keep fans up-to-date on the behind-the-scenes aspects of their work. So far, not much noise has come out of the social networking side of Apple Music, and Iovine knows the company has to work hard to make Connect what he and his team intended it to be in the first place.
“We have to prove [Connect’s value to artists], and we will slowly prove that. That will be the piece of the service that comes along last, or later, and we have some real plans,” he tells WIRED. “We’re building it out a lot more, it needs a lot of technical work as well. But we believe we’ll get there and it’ll be a great place for artists to communicate and with a lot of independence and freedom to do what they want to do. But we’re still building it.”
Wired‘s full interview with Iovine is worth a read, with more in-depth looks at the team that curates Apple Music for its users, his history making and producing records for artists like Eminem and Dr. Dre, and even his opinion of Apple’s old earbuds as fuel for co-founding Beats.
Deal Alert: OPPO R5 flash sale cuts the price to €239

OPPO is currently running a flash sale for its R5 smartphone, cutting the price from €399 to just €239, a saving of 40 percent off the usual retail cost.
The OPPO R5 offers a slim and sleek design at just 4.85mm thick, solid build quality and an excellent display for the price, but we did find the camera a little lacking.
For hardware specifications, €239 will net you an octa-core Qualcomm Snapdragon 615 processor, 2GB of RAM, a 13 megapixel rear camera, 5 megapixel front facing camera, and 16GB of internal storage. The display measures 5.2-inches and has a resolution of 1920×1080, which is an excellent feature at this price point. However, the battery is only 2,000mAh, leaving the handset a little short for a full day of heavy usage.
Read the review: Oppo R5 Review
The OPPO R5 is available in either silver or the limited edition grey color options and can be ordered directly from OPPO’s European online store. However, this promotion is a pre-order, so your new smartphone won’t ship out until the first week of September. I suppose there is always a catch when a price is reduced this heavily, but it’s still quite a bargain.
Using shared element transitions in activities and fragments

Historically, transitions between activities and fragments in Android involved animating the entire view hierarchy. However, with Material Design, it is now easier to animate selected Views during a transition to emphasize continuity, and guide users to content in the target Activity/Fragment.
Often, the start and target activities contain similar widgets. Shared element transitions, when used effectively, blur the boundary between the two activities, so the switch feels less jarring and more natural. They can also be used to guide the user to new content and to its position in the new Activity.
Before we begin, note that shared element transitions discussed here require Android 5.0 (API 21) and above, even using the Support libraries. As such, the code is littered with checks for the build version. In addition, while similar, shared element transitions are different for activities and fragments, and so, we will discuss them separately.
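Those checks all follow the same pattern, so a small helper like the sketch below is one way to keep them readable (this helper is our own illustration, not part of the sample project):
// Sketch only: shared element transitions require Android 5.0 (API 21) or higher.
private static boolean supportsSharedElementTransitions() {
    return Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP;
}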
Shared elements in Activities
In the sample project, available on GitHub, we implement two activities, StartActivity and EndActivity.
The StartActivity has an ImageView, a Button and a TextView saved in res/layout/activity_start.xml.
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:layout_margin="16dp">
    <ImageView
        android:id="@+id/imageView"
        android:layout_width="120dp"
        android:layout_height="120dp"
        android:layout_centerHorizontal="true"
        android:src="@drawable/aa_logo_green"
        android:transitionName="@string/activity_image_trans"/>
    <TextView
        android:id="@+id/textView"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_below="@id/imageView"
        android:text="Simple TextView"
        android:textSize="20sp"/>
    <Button
        android:id="@+id/button"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_below="@id/textView"
        android:layout_alignParentEnd="true"
        android:layout_alignParentRight="true"
        android:text="Click Me"
        style="@style/Widget.AppCompat.Button.Borderless"
        android:onClick="onClick"/>
</RelativeLayout>
The EndActivity has two ImageViews and an EditText, and is saved in res/layout/activity_end.xml.
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:layout_margin="16dp">
    <ImageView
        android:id="@+id/imageView"
        android:layout_width="220dp"
        android:layout_height="220dp"
        android:layout_alignParentEnd="true"
        android:layout_alignParentRight="true"
        android:layout_alignParentBottom="true"
        android:src="@drawable/aa_logo_green"
        android:transitionName="@string/activity_image_trans"/>
    <EditText
        android:id="@+id/editText"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_alignParentStart="true"
        android:layout_alignParentLeft="true"
        android:hint="An EditText"
        android:textSize="24sp"/>
    <ImageView
        android:id="@+id/smallerImageView"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_toLeftOf="@id/imageView"
        android:layout_alignBottom="@id/imageView"
        android:layout_alignParentStart="true"
        android:layout_alignParentLeft="true"
        android:src="@drawable/aa_logo_blue"/>
</RelativeLayout>
Take note of the transitionName attribute in both the StartActivity and EndActivity. This attribute is used to track shared elements between both activities. Shared elements do not need to have the same id, and do not even have to be of the same widget type. As will be shown subsequently, you can have a shared element transition from one type of View to virtually any other.
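The transition names used throughout this tutorial are string resources. They are not shown in the article, but a res/values/strings.xml along these lines would work (the values here are arbitrary placeholders; all that matters is that both sides reference the same resource):
<resources>
    <!-- Hypothetical values; only matching names on both ends matter. -->
    <string name="activity_image_trans">activity_image_trans</string>
    <string name="activity_text_trans">activity_text_trans</string>
    <string name="activity_mixed_trans">activity_mixed_trans</string>
    <string name="fragment_image_trans">fragment_image_trans</string>
</resources>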
As long as you have defined the same transitionName for both Views, performing a shared element transition becomes pretty straightforward. In StartActivity,
public void onClick(View view) {
    Intent intent = new Intent(this, EndActivity.class);
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
        // Pass the shared element (the ImageView) and its transitionName to the target activity.
        ActivityOptionsCompat options = ActivityOptionsCompat
                .makeSceneTransitionAnimation(this, imageView, getString(R.string.activity_image_trans));
        startActivity(intent, options.toBundle());
    } else {
        startActivity(intent);
    }
}
We call ActivityOptionsCompat.makeSceneTransitionAnimation() since our Activity classes extend AppCompatActivity. The method expects the current activity, the View being shared with the target activity, and the shared element’s transitionName. (NOTE: If you are not using the Support Libraries, you would use ActivityOptions rather than ActivityOptionsCompat.)
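For comparison, a minimal sketch of the same call without the Support Libraries might look like the snippet below, reusing the intent and imageView from the snippet above. The framework ActivityOptions class only exists on API 21 and above, so it must stay inside the version check:
// Framework (non-support-library) variant; only safe inside the API 21+ branch.
ActivityOptions options = ActivityOptions.makeSceneTransitionAnimation(
        this, imageView, getString(R.string.activity_image_trans));
startActivity(intent, options.toBundle());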
However, it is highly likely that your app contains some dynamically generated content, making it infeasible to set the transitionName in XML. Not to worry: the transitionName can be set in code, in both the start and end activities, and the transition will still be executed.
Think about the process flow. Before the transition begins, the target activity’s layout must be known. Hence, the onCreate() method of the target activity must have been called, and the correct view inflated. Therefore, in the onCreate() method, we can identify the target views and set the transitionName.
The code snippet below uses three shared elements, with two of them set dynamically.
public void onClick(View view) {
    View imageView = findViewById(R.id.imageView);
    View textView = findViewById(R.id.textView);
    View button = findViewById(R.id.button);
    Intent intent = new Intent(this, EndActivity.class);
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
        // Assign transition names at runtime for the two dynamically shared views.
        textView.setTransitionName(getString(R.string.activity_text_trans));
        button.setTransitionName(getString(R.string.activity_mixed_trans));
        Pair<View, String> pair1 = Pair.create(imageView, imageView.getTransitionName());
        Pair<View, String> pair2 = Pair.create(textView, textView.getTransitionName());
        Pair<View, String> pair3 = Pair.create(button, button.getTransitionName());
        ActivityOptionsCompat options = ActivityOptionsCompat
                .makeSceneTransitionAnimation(this, pair1, pair2, pair3);
        startActivity(intent, options.toBundle());
    } else {
        startActivity(intent);
    }
}
We create Pair objects, containing the desired beginning View, and the transitionName for that View. The makeSceneTransitionAnimation() method we use here expects an activity, followed by a list of Pair objects, containing all desired shared transition Views. (Recall that we are using the support library, so we import the android.support.v4.util.Pair class, rather than android.util.Pair)
In EndActivity, we find the two target views that need transitionNames to be set dynamically. The TextView, from StartActivity, transitions into an ImageView, while the Button transitions into an EditText.
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_end);
    View smallImageView = findViewById(R.id.smallerImageView);
    View editText = findViewById(R.id.editText);
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
        // Match the transition names assigned in StartActivity's onClick().
        smallImageView.setTransitionName(getString(R.string.activity_text_trans));
        editText.setTransitionName(getString(R.string.activity_mixed_trans));
    }
}
Be careful with shared element transitions. You do not want too many shared elements, which can become distracting and confusing rather than smooth and natural. You want to direct the user’s focus in an unobtrusive manner.
Shared Elements with Fragments
Shared element transitions with Fragments work in a conceptually similar way to the Activity transitions shown above. We implement two Fragments, StartFragment and EndFragment.
StartFragment contains a single ImageView, and a ListView. The ImageView has a transitionName attribute set, to guide a static transition.
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">
    <ImageView
        android:id="@+id/imageView"
        android:layout_width="wrap_content"
        android:layout_height="80dp"
        android:src="@drawable/nav_image"
        android:transitionName="@string/fragment_image_trans"/>
    <ListView
        android:id="@+id/listView"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:layout_below="@id/imageView">
    </ListView>
</RelativeLayout>
The EndFragment contains two ImageViews and a single TextView. One of the ImageViews has its transitionName attribute set.
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:padding="24dp"
    android:orientation="vertical">
    <ImageView
        android:id="@+id/listImage"
        android:layout_width="120dp"
        android:layout_height="120dp"
        android:layout_alignParentEnd="true"
        android:layout_alignParentRight="true"
        android:src="@drawable/aa_logo_blue"/>
    <TextView
        android:id="@+id/smallerImageView"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_centerVertical="true"
        android:layout_alignParentEnd="true"
        android:layout_alignParentRight="true"
        android:text="Sample"
        android:textSize="24sp"/>
    <ImageView
        android:id="@+id/otherImage"
        android:layout_width="120dp"
        android:layout_height="120dp"
        android:layout_alignParentBottom="true"
        android:layout_alignParentEnd="true"
        android:layout_alignParentRight="true"
        android:layout_marginTop="24dp"
        android:src="@drawable/nav_image"
        android:transitionName="@string/fragment_image_trans"/>
</RelativeLayout>
Shared element transitions between activities have a sane default transition that works pretty much as expected. For fragments, however, you have to specify a Transition yourself. We define a transition set in res/transition called change_image_trans. It contains two transition types: changeTransform, which captures the scale and rotation of Views before and after the scene change, and changeBounds, which captures the layout bounds of target Views before and after the scene change. Both transition types also animate the changes between the target Views, so we can be certain our Views will scale up or down as necessary and start and end at the correct location on screen.
<?xml version="1.0" encoding="utf-8"?>
<transitionSet xmlns:android="http://schemas.android.com/apk/res/android">
    <changeTransform />
    <changeBounds />
</transitionSet>
To animate the transition in a FragmentTransaction, we call setSharedElementEnterTransition() and setEnterTransition() on the target fragment (EndFragment). We also call setSharedElementReturnTransition() and setExitTransition() on the start fragment (StartFragment).
Finally, while building the FragmentTransaction, we call addSharedElement(), passing the initial View and its transitionName.
@Override
public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
    ImageView staticImage = (ImageView) getView().findViewById(R.id.imageView);
    EndFragment endFragment = new EndFragment();
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
        // Return/exit transitions on the current (start) fragment.
        setSharedElementReturnTransition(TransitionInflater.from(
                getActivity()).inflateTransition(R.transition.change_image_trans));
        setExitTransition(TransitionInflater.from(
                getActivity()).inflateTransition(android.R.transition.fade));
        // Enter transitions on the target (end) fragment.
        endFragment.setSharedElementEnterTransition(TransitionInflater.from(
                getActivity()).inflateTransition(R.transition.change_image_trans));
        endFragment.setEnterTransition(TransitionInflater.from(
                getActivity()).inflateTransition(android.R.transition.fade));
    }
    // textView and imageView reference views inside the clicked list item.
    Bundle bundle = new Bundle();
    bundle.putString("ACTION", textView.getText().toString());
    bundle.putParcelable("IMAGE", ((BitmapDrawable) imageView.getDrawable()).getBitmap());
    endFragment.setArguments(bundle);
    FragmentManager fragmentManager = getFragmentManager();
    fragmentManager.beginTransaction()
            .replace(R.id.container, endFragment)
            .addToBackStack("Payment")
            .addSharedElement(staticImage, getString(R.string.fragment_image_trans))
            .commit();
}
To implement multiple shared elements as well as dynamically generated shared elements, we use a ListView, with list items containing an ImageView and a TextView. In the ListAdapter, we set a transitionName for both the ImageView and the TextView for every list item.
class MyListAdapter extends ArrayAdapter<String> {
    // Constructor omitted for brevity.
    @Override
    public View getView(int position, View convertView, ViewGroup parent) {
        View view = convertView;
        if (view == null) {
            view = LayoutInflater.from(getContext()).inflate(R.layout.list_item, null);
        }
        TextView textView = (TextView) view.findViewById(R.id.textView);
        ImageView imageView = (ImageView) view.findViewById(R.id.imageView);
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
            // Give every row's views unique, position-based transition names.
            textView.setTransitionName("transtext" + position);
            imageView.setTransitionName("transition" + position);
        }
        return view;
    }
}
Setting up the transition is similar to the single element transition. The main difference here is we fetch the two dynamically generated transition names, and wrap them in a bundle for the target fragment. Finally, for each shared element, we call the addSharedElement() method.
@Override
public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
    String imageTransitionName = "";
    String textTransitionName = "";
    ImageView imageView = (ImageView) view.findViewById(R.id.imageView);
    TextView textView = (TextView) view.findViewById(R.id.textView);
    ...
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
        ...
        // Read back the position-based names assigned by the adapter.
        imageTransitionName = imageView.getTransitionName();
        textTransitionName = textView.getTransitionName();
    }
    bundle.putString("TRANS_NAME", imageTransitionName);
    bundle.putString("TRANS_TEXT", textTransitionName);
    endFragment.setArguments(bundle);
    ...
    fragmentManager.beginTransaction()
            .replace(R.id.container, endFragment)
            .addToBackStack("Payment")
            .addSharedElement(imageView, imageTransitionName)
            .addSharedElement(textView, textTransitionName)
            .addSharedElement(staticImage, getString(R.string.fragment_image_trans))
            .commit();
}
In EndFragment, we retrieve the transition names, and update the target Views.
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
                         Bundle savedInstanceState) {
    Bundle bundle = getArguments();
    String actionTitle = "";
    Bitmap imageBitmap = null;
    String transText = "";
    String transitionName = "";
    if (bundle != null) {
        transitionName = bundle.getString("TRANS_NAME");
        actionTitle = bundle.getString("ACTION");
        imageBitmap = bundle.getParcelable("IMAGE");
        transText = bundle.getString("TRANS_TEXT");
    }
    getActivity().setTitle(actionTitle);
    View view = inflater.inflate(R.layout.fragment_end, container, false);
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
        // Apply the transition names received from StartFragment to the target views.
        view.findViewById(R.id.listImage).setTransitionName(transitionName);
        view.findViewById(R.id.smallerImageView).setTransitionName(transText);
    }
    ((ImageView) view.findViewById(R.id.listImage)).setImageBitmap(imageBitmap);
    ((TextView) view.findViewById(R.id.smallerImageView)).setText(actionTitle);
    return view;
}
Wrap-up
Well-planned transitions and animations give an app a premium feel and are pleasurable for users. Movements between activities and fragments appear to flow naturally, and they guide the user’s focus towards the relationship between the new screen and the previous one. The complete source code for the tutorial is available on GitHub, and can be used and modified to your heart’s content. Happy coding.
D-Link is now offering the world’s first WiFi water sensor
D-Link is now offering the industry’s first ever WiFi water sensor that does pretty much what you’d expect it to. It connects to a wall outlet and emits a siren alarm if it ever comes into contact with water, which is great for alerting you to flooding or malfunctioning appliances.
Since it’s a WiFi sensor, it can also connect to D-Link’s mobile app on your smartphone to remotely notify you about any flooding, so you’re still aware of the problem even if you aren’t at home. That’s a pretty useful extra tool if you like to keep a connected home with other smart appliances.
D-Link also announced a WiFi siren extender that you can put in another room, so if the water sensor goes off it’ll also set off the extender via your WiFi network. No matter where you’re at in your house, you’ll be made very well aware of the fact that something is flooding.
D-Link is currently selling the water sensor for $49, although Newegg and Amazon are both carrying it for slightly higher pricing. It’s out of stock on all three sites, but should be back in stock soon. Considering water damage will almost always end up costing thousands of dollars to fix, this wouldn’t be a bad investment for less than a hundred bucks.
source: D-Link
SmartPrompt Pan is a digital frying pan with a screen in the handle, now up for pre-order
Someone somewhere is trying to turn the next mundane device or appliance into some kind of smart, connected home gadget, and that includes the SmartPrompt Pan. The SmartPrompt by Key Ingredient is a frying pan with a screen built right into the handle, and a ton of sensors baked in (no pun intended) to help you be a more effective cook.
The screen on the 12-inch skillet tells you the current temperature you’re cooking at, plus it gives you step by step instructions if you’re following one of the 2 million available recipes on the service. It wouldn’t be complete without being able to connect to your smartphone, of course, as the frying pan will give you alerts if your food is simmering, boiling, or burning, so you won’t have to constantly keep an eye on what you’re cooking.
But really, if you’re cooking something, you should probably pay attention to it anyway, because your smartphone isn’t good at putting out fires.
The SmartPrompt pan is up for pre-order on Indiegogo for $99, which is 50% off the price it will actually launch at. In addition to the pan, you can also order a smart frying pan sans the screen and just use your smartphone for monitoring, and the company is also offering a smart probe for measuring the temperature of things like meat while cooking, which will also send the info directly to your smartphone.
To me, this seems like a solution to a problem we didn’t really have, but if cooking is your passion and you don’t mind having another device laying around the kitchen to potentially keep charged, this might be a good addition for you.
source: Key Ingredient
LG’s G4 officially certified as a secure device by the US government
LG can now proudly say their current flagship, the G4, is officially secure enough for use in the US government after passing some pretty strict testing.
The device meets the US National Security Agency’s National Information Assurance Partnership standards, certifying it for use in over 25 countries, including the US. It meets international security standards as well as the US government’s Cryptographic Modules standards, which is a pretty big accomplishment and opens up a large new market for the G4.
LG GATE is LG’s own security platform and offers enhanced platform, network, and application security. It’s a benefit to large enterprise customers and helps keep sensitive data secured.
LG’s previous flagship, the G3, received the same NIAP certification, but it was also approved for use by the Department of Defense. The G4 is currently undergoing DoD testing.
source: LG
Microsoft’s Surface Pro 3 is coming to the NFL sidelines
With NFL teams set to kick off their pre-season next week, Microsoft today announced that the Surface Pro 2 won’t be used during games anymore. Instead, the upcoming 2015-2016 season will see its Sideline Viewing System, which lets players and coaches review game photos instantly, be powered by Surface Pro 3s for the first time. But that’s not the only change being made. Microsoft and the NFL are also going to start testing video as part of the Sideline Viewing System, including replays of questionable calls that referees can watch via the custom-made device. Even though this is only going to be a test run (over 20 pre-season games), it will be a significant move if it ends up being implemented in the regular season. Microsoft’s laptop/tablet hybrid has already replaced the traditional paper method for most teams, and it could do the same with monitors on the field, which is how officials currently watch replays.
For consumers, Microsoft has revealed a new app for Xbox One and, of course, Windows 10. Aside from getting a redesign to match the looks of the freshly released OS, the application will allow fans to get more detailed player stats. This includes velocity, speed and total distance run, whereas previously that was limited to catches, touchdowns, yards covered and more standard categories. Then there’s Game Day Notifications, designed to help fantasy league users who just want to get specific alerts while games are happening, like big play alerts, scores, highlights and other updates from the players they own. Most importantly, you can now set two favorite teams as well, which is a good thing for every fantasy football buff with a Windows 10 computer or an Xbox One.
ICYMI: Drone goes fishin’, reflection fix for photos & more
Today on In Case You Missed It: A $200,000 drone is helping wildlife officials protect fish from poaching and it looks super cool too. A new algorithm to eliminate reflections, raindrops and chain-link fences from photos is being tinkered with and we’d like it on all our photos now, please. And researchers at the University of Tokyo have a new prototype 3D projector that can project onto moving surfaces, no matter how much they shake.
And truly, the one thing you have to see: Tesla’s charger prototype that is cool, yes, but mostly just grossing us all out.
If you come across any interesting videos, we’d love to see them. Just tweet us with the #ICYMI hashtag @engadget or @mskerryd.
Gnarbox puts a video editing suite in your back pocket
Gnarbox is a modern solution to a modern problem. Right now, if you’re shooting video outdoors, you’ll either need your laptop with you, or (more likely) have to wait until you get back to base to make an edit. By which time, the moment has gone, and your footage risks ending up stockpiled on a memory card or hard drive. Gnarbox tackles this issue by bundling a WiFi hard drive with a quad-core processor, 4GB of RAM, a dedicated GPU, about seven hours battery life and a comprehensive mobile app. With just the paperback-sized device and a phone you can make decent edits, even with 4K video, and share them right after the wave/ride/moment.
The four key claims of Gnarbox are to help you back up, organize, edit and share your footage. Portability is key, allowing edits right after the event, and quickly. The Gnarbox has 128GB of storage (it’s less about archiving, and more about backing up your memory cards in the wild), and up to four phones (iOS now, Android at launch) can connect to access media at the same time. It’s also waterproof to one meter (IP67). The idea is that the quad-core processor and octa-core GPU do the heavy lifting, so no matter what your phone, you can slice and dice 4K video like it ain’t no thang.
https://player.vimeo.com/video/133421185
This sounds great, but are we not moving the problem a little further up the line? You still have to actually edit the video. Gnarbox CEO Tim Feess told Engadget how the team tried to eliminate friction points there, too. “To clip at the exact frame you want, you’re able to swipe right or left on a paused video to go frame-by-frame. It’s like you can physically touch the media files.” Feess also believes that by making the product platform agnostic (it works with all DSLRs and action cameras), and being a joint hardware-software offering, the edit experience is more consistent and more feature rich than what camera makers, or app makers can offer on their own.
Video editing options include frame-by-frame trimming, slow-mo and fast-mo, changing aspect ratios (hello, Instagram video), and adjusting light, color and filters, or layering music. There’s also a little bit of software magic going on behind the scenes to help you find highlights and key moments, a bit like what other cameras do. You can export videos in full resolution (in a laundry list of formats); this isn’t about making mobile-friendly videos, though it’s very capable at that, with direct export options to Facebook, Twitter, YouTube and Vimeo, among others.
While Gnarbox is currently running its Kickstarter, the product has been under development for two years, and is currently in Alpha testing. It already passed its funding goal by quite a margin, but if you want to get in quick, there’s still time to grab one of the $150 early bird offers, before it hits the shelves at $250 around spring 2016.
Source: Kickstarter











