Akamine Yuhei


Title

Associate Professor

Researcher Number(JSPS Kakenhi)

00433095


Current Affiliation Organization

  • Duty   University of the Ryukyus   Faculty of Engineering   School of Engineering, Computer Science and Intelligent Systems Program   Associate Professor

  • Concurrently   University of the Ryukyus   Graduate School of Engineering and Science   Computer Science and Intelligent Systems   Associate Professor  

Academic degree

  • University of the Ryukyus -  Doctor of Engineering

External Career

  • 2006.03 - 2020.01

    University of the Ryukyus, Faculty of Engineering, Department of Information Engineering, Intelligent Information Systems, Research Associate

  • 2020.04

Research Interests

  • deep learning

  • image processing

  • Augmented Reality

  • User Interface

Research Areas

  • Informatics / Intelligent informatics

Published Papers

  • Proposal of a Text Generation Method Using Sentence Graphs Based on an Extension of MolGAN

    Sawasaki Natsuki, Endo Satoshi, Toma Naruaki, Yamada Koji, Akamine Yuhei

    Journal of Japan Society for Fuzzy Theory and Intelligent Informatics ( Japan Society for Fuzzy Theory and Intelligent Informatics )  32 ( 2 ) 668 - 677   2020.04

    Type of publication: Research paper (scientific journal)

     View Summary

    Deep learning has been used to solve a wide range of classification problems, but many challenges remain when the amount of data per category is imbalanced. A common countermeasure is to augment the minority categories so that the dataset becomes balanced. In image processing, such augmentation is typically performed by adding noise or applying rotations, and more recently image-generation methods based on Generative Adversarial Networks (GANs) have been used. In natural language processing, however, no effective augmentation method has been established, and augmentation is still performed by hand, which imposes a heavy burden such as rule design; an automatic augmentation method is therefore needed. Automatic augmentation for text generation is less stable than for image generation, presumably because textual features are difficult to capture. This paper proposes a machine-learning text-generation method that focuses on graph information: graph structures produced by CaboCha are convolved using Graph Convolution. Text augmented by the proposed GAN was evaluated in three computational experiments, which demonstrated its effectiveness.
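
    The Graph Convolution step the abstract relies on can be sketched as a single layer applied to a dependency graph's adjacency matrix. This is a minimal illustration under common GCN conventions, not the paper's implementation; all names, dimensions, and data are made up:

    ```python
    import numpy as np

    def gcn_layer(A, X, W):
        """One graph-convolution layer: H = ReLU(D^-1/2 (A+I) D^-1/2 X W)."""
        A_hat = A + np.eye(A.shape[0])          # add self-loops
        d = A_hat.sum(axis=1)                   # node degrees
        D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric normalization
        return np.maximum(0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W)

    # Toy dependency graph with 3 nodes (words) and 4-dim word features
    A = np.array([[0, 1, 0],
                  [1, 0, 1],
                  [0, 1, 0]], dtype=float)
    X = np.random.default_rng(0).normal(size=(3, 4))
    W = np.random.default_rng(1).normal(size=(4, 2))
    H = gcn_layer(A, X, W)
    print(H.shape)  # (3, 2)
    ```

    Each node's output mixes its own features with those of its graph neighbours, which is how sentence structure enters the generator.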

  • Feature Acquisition and Analysis for Facial Expression Recognition Using Convolutional Neural Networks

    Nishime Taiki, Endo Satoshi, Toma Naruaki, Yamada Koji, Akamine Yuhei

    Transactions of the Japanese Society for Artificial Intelligence ( The Japanese Society for Artificial Intelligence )  32 ( 5 ) F-H34_1 - 8   2017

    Type of publication: Research paper (scientific journal)

     View Summary

    Facial expressions play as important a role in communication as words. Facial expression recognition by humans is hard to judge uniquely, because recognition sways with individual differences and subjective perception. It is therefore difficult to evaluate reliability from recognition accuracy alone, and analysis that explains the results and the features learned by Convolutional Neural Networks (CNN) is important. In this study, we carried out facial expression recognition from facial expression images using a CNN, and analysed the CNN to understand the learned features and prediction results. The emotions we focused on are "happiness", "sadness", "surprise", "anger", "disgust", "fear" and "neutral". Using 32286 facial expression images, we obtained an emotion recognition score of about 57%; for two emotions (happiness, surprise) the score exceeded 70%, but for anger and fear it was below 50%. In analysing the CNN, we focused on the learning process and on the input and intermediate layers. Analysis of the learning progress confirmed that, as training data increased, the emotions became recognisable in the order "happiness", "surprise", "neutral", "anger", "disgust", "sadness" and "fear". Analysis of the input and intermediate layers confirmed that features of the eyes and mouth strongly influence facial expression recognition, and that intermediate-layer neurons had activation patterns corresponding to facial expressions, patterns that do not respond to partial features of expressions alone. From these results, we conclude that the CNN learned partial features of the eyes and mouth from the input and recognises facial expressions using hidden-layer units whose areas correspond to each facial expression.

  • Estimating Age on Twitter Using Self-Training Semi-Supervised SVM

    Iju Tatsuyuki, Endo Satoshi, Yamada Koji, Toma Naruaki, Akamine Yuhei

    Proceedings of the International Conference on Artificial Life and Robotics ( ALife Robotics Co., Ltd. )  21   228 - 231   2016.01

    Type of publication: Research paper (scientific journal)

     View Summary

    Estimation methods for Twitter users' attributes typically require a vast amount of labeled data. An efficient alternative is to automatically tag unlabeled data and add it to the training set. We applied a self-training SVM as a semi-supervised method for age estimation and introduced Platt scaling as the criterion for selecting unlabeled data in the self-training process. We show how the performance of the self-training SVM varies as the amount of training data and the selection-criterion threshold are changed.
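
    The self-training loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: scikit-learn's `SVC(probability=True)` applies Platt scaling internally to produce calibrated probabilities, and the function name, threshold, and toy data are made up (labels are assumed to be 0..k-1):

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def self_training_svm(X_lab, y_lab, X_unlab, threshold=0.9, rounds=3):
        """Self-training: iteratively pseudo-label unlabeled points whose
        Platt-scaled class probability exceeds `threshold`, then retrain."""
        X_lab, y_lab, X_unlab = X_lab.copy(), y_lab.copy(), X_unlab.copy()
        for _ in range(rounds):
            if len(X_unlab) == 0:
                break
            clf = SVC(kernel="linear", probability=True).fit(X_lab, y_lab)
            proba = clf.predict_proba(X_unlab)   # Platt-scaled confidences
            keep = proba.max(axis=1) >= threshold  # selection criterion
            if not keep.any():
                break
            X_lab = np.vstack([X_lab, X_unlab[keep]])
            y_lab = np.concatenate([y_lab, proba[keep].argmax(axis=1)])
            X_unlab = X_unlab[~keep]
        return SVC(kernel="linear", probability=True).fit(X_lab, y_lab)

    # Toy 2-class data: two well-separated blobs, few labels, many unlabeled
    rng = np.random.default_rng(0)
    X0 = rng.normal([-2, 0], 0.5, size=(20, 2))
    X1 = rng.normal([2, 0], 0.5, size=(20, 2))
    X_lab = np.vstack([X0[:5], X1[:5]])
    y_lab = np.array([0] * 5 + [1] * 5)
    X_unlab = np.vstack([X0[5:], X1[5:]])
    clf = self_training_svm(X_lab, y_lab, X_unlab)
    print(clf.predict([[-2, 0], [2, 0]]))  # the blob centres classify as 0 and 1
    ```

    The threshold controls how aggressively unlabeled points are absorbed; the paper's contribution is studying how performance varies with this criterion and the amount of training data.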

  • Feature Acquisition From Facial Expression Image Using Convolutional Neural Networks

    Nishime Taiki, Endo Satoshi, Yamada Koji, Toma Naruaki, Akamine Yuhei

    Proceedings of the International Conference on Artificial Life and Robotics ( ALife Robotics Co., Ltd. )  21   224 - 227   2016.01

    Type of publication: Research paper (scientific journal)

     View Summary

    In this study, we carried out facial expression recognition on a facial expression dataset using Convolutional Neural Networks (CNN), and analyzed the intermediate outputs of the CNN. As a result, we obtained an emotion recognition score of about 58%; for two emotions (Happiness, Surprise) the score was about 70%. We also confirmed that specific units of the intermediate layer had learned features related to Happiness. This paper details these experiments and investigations into the influence of CNN learning on facial expression.

  • A study on emotion estimation of narratives using cognitive appraisals of the reader.

    Naruaki Toma, Yuhei Akamine, Koji Yamada, Satoshi Endo

    2016 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2016, Budapest, Hungary, October 9-12, 2016 ( IEEE )    572 - 576   2016 [ Peer Review Accepted ]

    Type of publication: Research paper (other science council materials etc.)


Other Papers

  • Consideration on Generation of Saliency Maps in Each Action of Deep Reinforcement Learning Agent

    NAGAMINE Kazuki, ENDO Satoshi, YAMADA Koji, TOMA Naruaki, AKAMINE Yuhei

    Proceedings of the Annual Conference of JSAI   2019 ( 0 ) 3K4J201 - 3K4J201   2019

     

  • Analysis of the Rationale behind Action Values of a Deep Q-Network through Visualization

    Nagamine Kazuki, Endo Satoshi, Yamada Koji, Toma Naruaki, Akamine Yuhei

    Proceedings of the Forum on Information Technology   17th   319 - 320   2018.09

     


  • Aspect Word Extraction by a Dual-Embeddings CNN Using Part-of-Speech Embeddings

    Maeda Yuichiro, Endo Satoshi, Yamada Koji, Toma Naruaki, Akamine Yuhei

    Proceedings of the Forum on Information Technology   17th   189 - 190   2018.09

     


  • A Study on Article Title Generation Using Inversion

    Ito Takumi, Akamine Yuhei, Yamada Koji, Endo Satoshi, Toma Naruaki

    Proceedings of the IPSJ National Convention   79th ( 2 ) 2.577 - 2.578   2017.03

     


  • A Basic Experiment of Extracting Evaluative Targets from Movie Review

    Aharen Chie, Toma Naruaki, Endo Satoshi, Yamada Koji, Akamine Yuhei

    Intelligent Systems Symposium (CD-ROM)   27th   Paper No. 1B1-2   2017

     



Presentations

  • 3D Digital archiving historic sites in Okinawa and a visualization web application

    NEROME Moeko, AKAMINE Yuhei

    JSAI Technical Report, SIG-KBS  2022  -  2022 


  • An Automatic Generation of 3D Plant Models by Genetic Algorithms into Plant-hormones based Growth Model

    Kazuki Uehara, Yuhei Akamine, Satoshi Endo and Moeko Nerome

    NICOGRAPH International 2013, the 12th annual international conference of the Society for Art and Science  2013.06  -  2013.06 

  • Development of a Simulation System for the Spread of Northern Hemisphere Forest Fires

    Wataru Uema, Satoshi Endo, Yuhei Akamine, Keiji Kimura, Honma Toshihisa

    CCAS-SICE International Joint Conference 2009 Final Program and Papers  2009.09  -  2009.09 

  • Development of Simulation for the Spread of the Boreal Forest Fires

    UEMA Wataru, IHA Yusuke, ENDO Satoshi, AKAMINE Yuhei, KIMURA Keiji, HONMA Toshihisa

    The 8th International Conference on Applications and Principles of Information Science  2009.01  -  2009.01 

  • A Route Planning Method for Demand-Bus Transportation in a Hierarchical Cooperative Traffic System

    Proceedings of the 14th Forum on Information Technology


Grant-in-Aid for Scientific Research

  • Grant-in-Aid for Scientific Research(C)

    Project Year: 2022.04  -  2027.03 

    Direct: 3,300,000 (YEN)  Overheads: 990,000 (YEN)  Total: 4,290,000 (YEN)

  • Grant-in-Aid for Scientific Research(C)

    Project Year: 2019.04  -  2022.03 

    Direct: 3,000,000 (YEN)  Overheads: 900,000 (YEN)  Total: 3,900,000 (YEN)

  • Digital guidance system with interactive media using augmented reality for museum

    Grant-in-Aid for Scientific Research(C)

    Project Year: 2019.04  -  2022.03 

    Investigator(s): Akamine Yuhei 

    Direct: 3,300,000 (YEN)  Overheads: 990,000 (YEN)  Total: 4,290,000 (YEN)

     View Summary

    We developed a digital guidance system for museums that runs on tablets and smartphones using augmented reality, together with related technology. We aimed to develop a system that not only provides information from the exhibitor's side, but also estimates visitors' level of interest in individual exhibits and their reactions to them. We developed a user interface that lets users easily retrieve information about exhibits they are interested in, a technique for superimposing information on camera images in real time, a method for estimating visitor flow and interest, and a method to automatically search for similar exhibits. Content incorporating some of the developed techniques was exhibited in a special exhibition at a museum and was highly evaluated.

  • The Development of a Guidance System using a Robust Markerless Augmented Reality for Out-of-door Exhibitions

    Grant-in-Aid for Scientific Research(C)

    Project Year: 2014.04  -  2017.03 

    Investigator(s): NEROME Moeko 

    Direct: 2,800,000 (YEN)  Overheads: 840,000 (YEN)  Total: 3,640,000 (YEN)

     View Summary

    In this study, we proposed and developed a fast method for detecting specific objects using frequency distributions of color and intensity gradient computed from an image of the target object. The goal of the study is a digital guidance system for outdoor exhibitions, using augmented reality techniques that can run on mobile devices such as smartphones. The proposed method detects specific objects quickly and is robust to environmental changes, such as illumination changes and secular changes, that can reduce detection precision; such changes commonly occur at outdoor exhibitions. In addition, the method performs detection at practical speed on low-power devices such as smartphones, because it requires shorter learning and detection times than deep learning.
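
    The matching idea in the summary, describing a target by frequency distributions of intensity and gradient orientation and scoring candidates by histogram similarity, can be sketched as follows. This is a minimal illustration under our own assumptions, not the project's implementation; `descriptor`, `intersection`, the bin counts, and the toy patches are made up:

    ```python
    import numpy as np

    def descriptor(patch, bins=8):
        """Concatenate an intensity histogram with a gradient-orientation
        histogram: the kind of frequency distributions the method matches."""
        hist_i, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
        gy, gx = np.gradient(patch.astype(float))
        angles = np.arctan2(gy, gx)                    # orientations in [-pi, pi]
        hist_g, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
        h = np.concatenate([hist_i, hist_g]).astype(float)
        return h / h.sum()                             # normalize to a distribution

    def intersection(h1, h2):
        """Histogram-intersection similarity in [0, 1]."""
        return np.minimum(h1, h2).sum()

    # Toy example: a target patch matches itself better than a noise patch
    rng = np.random.default_rng(0)
    target = np.tile(np.linspace(0, 1, 16), (16, 1))   # smooth horizontal ramp
    other = rng.random((16, 16))                       # random noise
    sim_same = intersection(descriptor(target), descriptor(target))
    sim_diff = intersection(descriptor(target), descriptor(other))
    print(sim_same > sim_diff)  # True
    ```

    Because the descriptor is a pair of small histograms rather than a learned model, it can be computed and compared in real time on a phone, which is the trade-off against deep learning that the summary highlights.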


Social Activity

  • 2019.11 - 2020.01