Festive demand for gold in India got off to a tepid start, with local prices still at a heavy discount to the global benchmark, a bad sign for a period when buying is typically strong.

Though sales picked up this week with the onset of the festival season, demand was lower than usual, retailers said, even as jewellers splashed newspapers across the country with ads promising good deals and discounts.

Following the nine-day Hindu festival of Navratri, India celebrates Dussehra on Thursday, when buying gold jewellery, coins or bars is considered auspicious. The fourth quarter is typically a strong period for gold purchases in India, the world's second-biggest bullion consumer, due to festivals and weddings.

"This year fewer customers are visiting our showrooms compared to last year," said Tanya Rastogi, a director at Lala Jugal Kishore Jewellers in the northern state of Uttar Pradesh. "For the last two to three years, gold has been giving negative returns. It has badly affected investment demand," Rastogi said. Still, next month India will celebrate Dhanteras and Diwali, when demand could improve, she added.

Global gold prices are on track to post their third straight annual loss this year, already down nearly 40 percent since hitting a record high in 2011. Dealers in India were offering a discount of $8 to $12 an ounce this week, compared with $7 to $11 last week.

Demand from rural areas has been hit particularly hard as farmers suffer from India's first back-to-back drought in three decades. Two-thirds of gold demand in India comes from farmers and residents of small villages who see jewellery as a way to store wealth, but lower-than-normal monsoon rainfall this year, due to the El Nino weather pattern, has eroded rural incomes.

"Demand from rural areas has moderated due to drought. Jewellers from the countryside made thin purchases in the last few weeks," said a Mumbai-based dealer with a private bank.

Elsewhere in Asia, demand remained lacklustre.
In top consumer China, prices on the Shanghai Gold Exchange ticked up to a premium from a small discount late last week, though dealers at bullion banks said physical buying wasn't strong.

"Demand is very sluggish," said Ronald Leung, chief dealer at Lee Cheong Gold Dealers Ltd in Hong Kong, adding that a strong dollar and a recent price rally were hurting demand. In Hong Kong this week, premiums dropped to 80 to 90 cents an ounce, from $1.20 to $1.30 last week.
We recently asked whether Dart programming was dead, but news of its death may well have been exaggerated. Version 2 of the programming language has just been released, with a range of updates and changes that should cement its popularity with admirers and win new users too. With Dart playing a big part in Google's much-anticipated Flutter and Fuchsia projects, there's a possibility that version 2.0 represents a brand new chapter in Dart's life.

News of a Dart 'reboot' first emerged in February 2018. Anders Thorhauge Sandholm said at the time that "with Dart 2, we've dramatically strengthened and streamlined the type system, cleaned up the syntax, and rebuilt much of the developer tool chain from the ground up to make mobile and web development more enjoyable and productive." Six months later, the team appears to have delivered on that promise, and it will be hoping the release makes a positive impact on the language's wider adoption.

What's new in Dart 2.0?

There's a whole host of changes that Dart developers will love, all of which can be found in the changelog on GitHub. Most notable is a stronger type system, which includes runtime checks that catch errors more effectively. And for developers working on Flutter, you can now create an instance of a class without using the "new" keyword.

Other key changes to Dart include:

- "Functions marked async now run synchronously until the first await statement. Previously, they would return to the event loop once at the top of the function body before any code runs."
- "Constants in the core libraries have been renamed from SCREAMING_CAPS to lowerCamelCase."
- "…New methods have been added to core library classes. If you implement the interfaces of these classes, you will need to implement the new methods."

All the changes you'll find in Dart 2.0 amount to the same thing: improving the developer experience and making the code more readable.
The obvious context to all this 'reboot' talk is that Google is betting on the growth of Flutter and Fuchsia over the next few years. With these improvements, it's possible that we'll begin to see Dart's fortunes changing. CodeMentor may have called Dart the 'worst programming language to learn in 2018' at the start of the year, but it will be interesting to see whether its popularity has grown by the time we hit 2019.

You can download Dart 2.0.0 for Windows, Mac, and Linux here.
The U.S. Defense Advanced Research Projects Agency (DARPA) has come out with AI-based forensic tools to catch deepfakes, as first reported by MIT Technology Review yesterday. According to the report, more tools are in development to expose fake images and revenge-porn videos on the web. DARPA's deepfake-detection project was announced earlier this year.

[Image: Alec Baldwin on Saturday Night Live, face-swapped with Donald Trump]

As the MediFor blog post puts it, "While many manipulations are benign, performed for fun or for artistic value, others are for adversarial purposes, such as propaganda or misinformation campaigns." This is one of the major reasons why DARPA's forensics experts are keen on finding methods to detect deepfake videos and images.

How did deepfakes originate?

Back in December 2017, a Reddit user named "DeepFakes" posted extremely real-looking explicit videos of celebrities, using deep learning techniques to insert celebrities' faces into adult movies. With deep learning, one can combine and superimpose existing images and videos onto originals to create realistic-seeming fake videos. As MIT Technology Review explains, video forgeries rely on a machine-learning technique called generative modeling, which "lets a computer learn from real data before producing fake examples that are statistically similar." Video tampering is done using two neural networks trained against each other, a setup known as a generative adversarial network, which work in conjunction "to produce ever more convincing fakes."

Why are deepfakes toxic?

An app named FakeApp, released earlier this year, made creating deepfakes quite easy. FakeApp uses neural-networking tools developed by Google's AI division; the app trains itself to perform image-recognition tasks using trial and error. Since its release, the app has been downloaded more than 120,000 times, and there are tutorials online on how to create deepfakes.
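The adversarial setup described above can be illustrated with a toy example. The sketch below is purely illustrative (the one-dimensional Gaussian "real" data, the single-parameter models, and the learning rate are all invented for the demo); real deepfake systems use deep convolutional generators and discriminators over images, but the two-network tug-of-war is the same:

```python
import numpy as np

# Toy 1-D GAN: a tiny generator learns to mimic "real" data drawn from N(4, 1).
rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Generator: noise z ~ N(0, 1) is mapped to a*z + b.
a, b = 1.0, 0.0
# Discriminator: logistic regression, D(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0
lr, n = 0.01, 64

for step in range(5000):
    x_real = rng.normal(4.0, 1.0, size=n)
    z = rng.normal(size=n)
    x_fake = a * z + b

    # Discriminator ascent on log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    w += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascent on log D(fake) (the "non-saturating" loss).
    d_fake = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

# The generator's offset b should have drifted toward the real mean, 4.0.
print(round(b, 1))
```

Each side improves only because the other does: the discriminator gets better at flagging fakes, which in turn pushes the generator's samples closer to the real distribution.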
Apart from this, there are regular requests on deepfake forums asking for help creating face-swap porn videos of ex-girlfriends, classmates, politicians, celebrities, and teachers. Deepfakes can even be used to create fake news, such as a world leader appearing to declare war on a country. The toxic potential of this technology has led to growing concern, as deepfakes have become a powerful tool for harassing people. Once deepfakes found their way onto the world wide web, websites such as Twitter and PornHub banned them from their platforms. Reddit also announced a ban earlier this year, killing the "deepfakes" subreddit, which had more than 90,000 subscribers, entirely.

MediFor: DARPA's AI weapon to counter deepfakes

DARPA's Media Forensics group, known as MediFor, is working with outside researchers to develop AI tools against deepfakes. It is currently focusing on four techniques to catch the audiovisual discrepancies present in a forged video: analyzing lip sync, detecting speaker inconsistency, detecting scene inconsistency, and spotting content insertions.

One technique comes from a team led by Professor Siwei Lyu of SUNY Albany. Lyu mentioned that they "generated about 50 fake videos and tried a bunch of traditional forensics methods. They worked on and off, but not very well." Because deepfakes are created from static images, Lyu noticed that the faces in deepfake videos rarely blink and that eye movement, when present, is quite unnatural. An academic paper titled "In Ictu Oculi: Exposing AI Generated Fake Face Videos by Detecting Eye Blinking," by Yuezun Li, Ming-Ching Chang and Siwei Lyu, explains a method to detect such forged videos using Long-term Recurrent Convolutional Networks (LRCN). According to the paper, people blink, on average, about 17 times a minute, or 0.283 times per second; the rate increases during conversation and decreases while reading.
There are several other techniques for eye-blink detection, such as determining eye state from the vertical distance between the eyelids, measuring the eye aspect ratio (EAR), and using a convolutional neural network (CNN) to classify open and closed eye states. Li, Chang, and Lyu take a different approach, relying on a Long-term Recurrent Convolutional Network (LRCN) model. They first perform pre-processing to identify facial features and normalize the video frame orientation, then pass cropped eye images into the LRCN for evaluation. The technique is quite effective, and it outperforms the alternatives, with a reported accuracy of 0.99 (LRCN) compared to 0.98 (CNN) and 0.79 (EAR).

However, Lyu says that a skilled video editor can fix non-blinking deepfakes by using source images that show blinking eyes. Lyu's team has an effective technique in the works to counter even that, though he hasn't divulged any details. Others at DARPA are on the lookout for similar cues, such as strange head movements and odd eye color, little details that are bringing the team ever closer to reliable deepfake detection.

As the MIT Technology Review post notes, "the arrival of these forensics tools may simply signal the beginning of an AI-powered arms race between video forgers and digital sleuths." MediFor, for its part, states: "If successful, the MediFor platform will automatically detect manipulations, provide detailed information about how these manipulations were performed, and reason about the overall integrity of visual media to facilitate decisions regarding the use of any questionable image or video."

Deepfakes need to stop, and DARPA seems all set to fight against them.
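As a footnote, the eye-aspect-ratio (EAR) baseline mentioned above is simple enough to sketch in full. The six-landmark formula is the standard one from Soukupová and Čech's EAR work (not from Lyu's paper); the landmark coordinates and the 0.2 threshold below are invented for the demo:

```python
import math

def euclidean(p, q):
    """Distance between two (x, y) landmark points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def eye_aspect_ratio(landmarks):
    """EAR from six eye landmarks (p1..p6): p1/p4 are the horizontal corners,
    p2/p6 and p3/p5 are the two vertical pairs. EAR falls toward 0 as the
    eye closes, since the vertical distances collapse."""
    p1, p2, p3, p4, p5, p6 = landmarks
    vertical = euclidean(p2, p6) + euclidean(p3, p5)
    horizontal = euclidean(p1, p4)
    return vertical / (2.0 * horizontal)

def count_blinks(ear_sequence, threshold=0.2):
    """Count blinks in a per-frame EAR sequence: one blink per dip below
    the threshold."""
    blinks, below = 0, False
    for ear in ear_sequence:
        if ear < threshold and not below:
            blinks += 1
            below = True
        elif ear >= threshold:
            below = False
    return blinks

# Hypothetical pixel coordinates for an open and a (nearly) closed eye.
open_eye = [(0, 0), (10, 8), (20, 8), (30, 0), (20, -8), (10, -8)]
closed_eye = [(0, 0), (10, 1), (20, 1), (30, 0), (20, -1), (10, -1)]

print(round(eye_aspect_ratio(open_eye), 3))    # well above the threshold
print(round(eye_aspect_ratio(closed_eye), 3))  # well below it
```

The hand-crafted nature of this feature is also why it trails the learned models in the paper's comparison: a single ratio and threshold cannot capture the temporal patterns an LRCN learns from whole sequences of frames.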