
IELTS Reading Material: Eye robot



  Poor eyesight remains one of the main obstacles to letting robots loose among humans. But it is improving, in part by aping natural vision

  ROBOTS are getting smarter and more agile all the time. They disarm bombs, fly combat missions, put together complicated machines, even play football. Why, then, one might ask, are they nowhere to be seen, beyond war zones, factories and technology fairs? One reason is that they themselves cannot see very well. And people are understandably wary of purblind contraptions bumping into them willy-nilly in the street or at home.

  All that a camera-equipped computer sees is lots of picture elements, or pixels. A pixel is merely a number reflecting how much light has hit a particular part of a sensor. The challenge has been to devise algorithms that can interpret such numbers as scenes composed of different objects in space. This comes naturally to people and, barring certain optical illusions, takes next to no time and precious little conscious effort. Yet emulating this feat in computers has proved tough.
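  To make the point concrete, here is a minimal sketch (in Python with NumPy, our choice; the article names no tools) of what a camera-equipped computer actually receives: a grid of plain numbers, with no objects anywhere in sight.

```python
import numpy as np

# A tiny 4x4 greyscale "image": each pixel is just a number recording
# how much light hit that part of the sensor (0 = black, 255 = white).
image = np.array([
    [  0,   0, 255, 255],
    [  0,   0, 255, 255],
    [255, 255,   0,   0],
    [255, 255,   0,   0],
], dtype=np.uint8)

# This is the computer's entire view of the scene: 16 numbers.
# Nothing here says "checkerboard", "edge" or "object"; an algorithm
# has to infer all of that from the raw values.
print(image.shape)   # (4, 4)
print(image.mean())  # 127.5, the average brightness
```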

  In natural vision, after an image is formed in the retina it is sent to an area at the back of the brain, called the visual cortex, for processing. The first nerve cells it passes through react only to simple stimuli, such as edges slanting at particular angles. They fire up other cells, further into the visual cortex, which react to simple combinations of edges, such as corners. Cells in each subsequent area discern ever more complex features, with those at the top of the hierarchy responding to general categories like animals and faces, and to entire scenes comprising assorted objects. All this takes less than a tenth of a second.

  The outline of this process has been known for years, and in the late 1980s Yann LeCun, now at New York University, pioneered an approach to computer vision that tries to mimic the hierarchical way the visual cortex is wired. He has been tweaking his convolutional neural networks, or ConvNets, ever since.

  Seeing is believing

  A ConvNet begins by swiping a number of software filters, each several pixels across, over the image, pixel by pixel. Like the brain's primary visual cortex, these filters look for simple features such as edges. The upshot is a set of feature maps, one for each filter, showing which patches of the original image contain the sought-after element. A series of transformations is then performed on each map in order to enhance it and improve the contrast. Next, the maps are swiped again, but this time rather than stopping at each pixel, the filter takes a snapshot every few pixels. That produces a new set of maps of lower resolution. These highlight the salient features while reining in the need for computing power. The whole process is then repeated, with several hundred filters probing for more elaborate shapes rather than just a few scouring for simple ones. The resulting array of feature maps is run through one final set of filters. These classify objects into general categories, such as pedestrians or cars.
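  The following Python/NumPy sketch traces that pipeline on a single image. One hand-written edge filter stands in for the hundreds a real ConvNet uses, a rectifying nonlinearity plays the role of the contrast-enhancing transformations, and a strided maximum implements the "snapshot every few pixels". The filter values and sizes are illustrative assumptions, not Dr LeCun's actual parameters.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a small filter over the image pixel by pixel ('valid' mode),
    producing a feature map that is high wherever the pattern appears."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def subsample(fmap, stride=2):
    """Take a 'snapshot every few pixels': keep the strongest response in
    each stride-by-stride patch, yielding a lower-resolution map."""
    h, w = fmap.shape
    h, w = h - h % stride, w - w % stride
    patches = fmap[:h, :w].reshape(h // stride, stride, w // stride, stride)
    return patches.max(axis=(1, 3))

# A hand-picked vertical-edge filter, standing in for the many filters
# a real ConvNet would apply at this stage.
vertical_edge = np.array([[1.0, 0.0, -1.0],
                          [1.0, 0.0, -1.0],
                          [1.0, 0.0, -1.0]])

image = np.random.rand(8, 8)             # stand-in 8x8 input image
fmap = convolve2d(image, vertical_edge)  # one feature map per filter
fmap = np.maximum(fmap, 0.0)             # contrast-enhancing transformation
pooled = subsample(fmap)                 # snapshot every few pixels
print(fmap.shape, pooled.shape)          # (6, 6) -> (3, 3)

# One final set of filters would act as the classifier; here a single
# random weight vector stands in for one category's score.
weights = np.random.rand(pooled.size)
print("category score:", float(weights @ pooled.ravel()))
```

  In a real system this filter-rectify-subsample cycle is stacked several layers deep, which is what lets later filters respond to ever more elaborate shapes.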

  Many state-of-the-art computer-vision systems work along similar lines. The uniqueness of ConvNets lies in where they get their filters. Traditionally, these were simply plugged in one by one, in a laborious manual process that required an expert human eye to tell the machine what features to look for, in future, at each level. That made systems which relied on them good at spotting narrow classes of objects but inept at discerning anything else.

  Dr LeCun's artificial visual cortex, by contrast, lights on the appropriate filters automatically as it is taught to distinguish the different types of object. When an image is fed into the unprimed system and processed, the chances are it will not, at first, be assigned to the right category. But, shown the correct answer, the system can work its way back, modifying its own parameters so that the next time it sees a similar image it will respond appropriately. After enough trial runs, typically 10,000 or more, it makes a decent fist of recognising that class of objects in unlabelled images.
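  In miniature, that learn-from-the-correct-answer loop looks like the sketch below (Python/NumPy again). A single linear stage stands in for the whole ConvNet and the data are synthetic, so everything here is illustrative; the point is only the shape of the procedure: guess, compare with the right answer, work backwards, nudge the parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 flattened 4x4 "images" with a hidden rule the system
# must discover (label is 1 when the top half is bright, else 0).
X = rng.normal(size=(100, 16))
y = (X[:, :8].sum(axis=1) > 0).astype(float)

w = np.zeros(16)   # the system's adjustable parameters
b = 0.0
lr = 0.1

for step in range(1000):                    # each pass is one "trial run"
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # current guesses
    err = p - y                             # shown the correct answer...
    w -= lr * (X.T @ err) / len(y)          # ...work backwards, modifying
    b -= lr * err.mean()                    # parameters for next time

accuracy = ((p > 0.5) == y).mean()
print(f"accuracy after 1000 trial runs: {accuracy:.0%}")
```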

  This still requires human input, though. The next stage is unsupervised learning, in which instruction is entirely absent. Instead, the system is shown lots of pictures without being told what they depict. It knows it is on to a promising filter when the output image resembles the input. In a computing sense, resemblance is gauged by the extent to which the input image can be recreated from the lower-resolution output. When it can, the filters the system had used to get there are retained.
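  A stripped-down sketch of that reconstruction test, once more in Python/NumPy: a linear encoder stands in for the filters, its transpose for the decoder, and the filters are kept (here, trained) exactly insofar as the input can be recreated from the lower-resolution code. Real systems use richer encoders; the principle is the same.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 16))            # unlabelled stand-in image patches

k = 4                                     # size of the lower-resolution code
W = rng.normal(scale=0.1, size=(16, k))   # candidate filters (encoder)
lr = 0.01

for _ in range(2000):
    code = X @ W        # lower-resolution output of the filters
    recon = code @ W.T  # try to recreate the input from that output
    err = recon - X     # resemblance: how far off is the copy?
    # Nudge the filters to shrink the reconstruction error (tied weights).
    W -= lr * (X.T @ err @ W + err.T @ X @ W) / len(X)

final_err = np.mean((X @ W @ W.T - X) ** 2)
print(f"mean reconstruction error: {final_err:.3f}")  # low => keep the filters
```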

  In a tribute to nature's nous, the lowest-level filters arrived at in this unaided process are edge-seeking ones, just as in the brain. The top-level filters are sensitive to all manner of complex shapes. Caltech-101, a database routinely used for vision research, consists of some 10,000 standardised images of 101 types of just such complex shapes, including faces, cars and watches. When a ConvNet with unsupervised pre-training is shown the images from this database it can learn to recognise the categories more than 70% of the time. This is just below what top-scoring hand-engineered systems are capable of, and those tend to be much slower.

  This approach need not be confined to computer vision. In theory, it ought to work for any hierarchical system: language processing, for example. In that case individual sounds would be low-level features akin to edges, whereas the meanings of conversations would correspond to elaborate scenes.

  For now, though, ConvNet has proved its mettle in the visual domain. Google has been using it to blot out faces and licence plates in its Streetview application. It has also come to the attention of DARPA, the research arm of America's Defence Department. This agency provided Dr LeCun and his team with a small roving robot which, equipped with their system, learned to detect large obstacles from afar and correct its path accordingly, a problem that lesser machines often, as it were, trip over. The scooter-sized robot was also rather good at not running into the researchers. In a selfless act of scientific bravery, they strode confidently in front of it as it rode towards them at a brisk walking pace, only to see it stop in its tracks and reverse. Such machines may not quite yet be ready to walk the streets alongside people, but the day they can is surely not far off.

  


  
