Recent Questions - Artificial Intelligence Stack Exchange
//www.1kvaups.com/q/11666 1 Which learning tasks do brains use to train themselves to see? Pablo Messina //www.1kvaups.com/users/12746 2019-04-05T06:34:04Z 2019-04-05T07:04:58Z

In computer vision it is very common to use supervised tasks, where datasets have to be manually annotated by humans. Some examples are object classification (class labels), detection (bounding boxes) and segmentation (pixel-level masks). But animals don't need anybody to show them bounding boxes or masks on top of things in order for them to learn to detect objects and make sense of the visual world around them. This leads me to think that brains must be performing some sort of self-supervision to train themselves to see. What does current research say about the learning paradigm used by brains to achieve such an outstanding level of visual competence? Which tasks do brains use to train themselves to be so good at processing visual information? Finally, can we apply these insights in computer vision?

//www.1kvaups.com/q/11661 1 Architecture and Use of Different Algorithms for Health Goal Feedback Invic18 //www.1kvaups.com/users/23726 2019-04-05T03:26:43Z 2019-04-05T03:34:53Z

I wanted to get some opinions from the community for a certain problem that I will be approaching.

The problem is to provide feedback to a user based on an image of the upper male torso. The image could reflect something positive, like increasing muscle mass, or something negative, like gaining adipose tissue or muscle atrophy, or a combination of both.

Using the user's input (sleep data, food, training routine) among some other data, I would like to provide feedback such as "no John, this exercise has not yielded desirable results" or "a combination of your recent dietary changes has caused strength loss". Obviously this is a complex issue with a lot of interconnected variables and potential outcomes, but you get the high-level idea at least, and if you don't - please ask.

So my idea so far would be to use a CNN that takes the picture of the torso. Using a softmax function, we could run this through a model to estimate body fat, and do the same with a model trained on muscle mass. Using those two models, we could paint a pretty accurate picture of someone's physique and whether they're going in the right direction or not. We could then go on to analyse what that user may have done, or not done, to yield that result. Obviously there would be connected models here and many different combinations of algorithms applied, such as CNNs, RNNs and others. Really curious to hear your response(s), thank you in advance.
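For illustration only, here is a minimal sketch of the kind of pipeline described above; body_fat_model, muscle_mass_model and the user_log keys are hypothetical placeholders, and the rule-based feedback stands in for whatever downstream models would actually be used.

import numpy as np

def assess_physique(torso_image, user_log, body_fat_model, muscle_mass_model):
    """Combine two hypothetical image models with the user's logged data."""
    x = np.expand_dims(torso_image, axis=0)           # add a batch dimension
    body_fat = float(body_fat_model.predict(x)[0])    # e.g. estimated body-fat score
    muscle = float(muscle_mass_model.predict(x)[0])   # e.g. estimated muscle-mass score

    # Placeholder logic: compare against the user's previous estimates.
    feedback = []
    if body_fat > user_log["previous_body_fat"]:
        feedback.append("Body fat appears to have increased since the last check-in.")
    if muscle < user_log["previous_muscle"]:
        feedback.append("Muscle mass appears to have decreased.")
    return feedback or ["Trending in the right direction."]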

//www.1kvaups.com/q/11660 0 Selecting SD Storage size for Jetson Nano gatorback //www.1kvaups.com/users/18819 2019-04-04T23:03:37Z 2019-04-05T04:39:30Z

When selecting an SD card for the Jetson Nano:

  • What is the compelling reason to buy memory beyond the requirement of the OS?
  • Are there any rules of thumb / formulas to properly size the SD card?
//www.1kvaups.com/q/11659 0 Denoising and Improving the Quality of Scanned Books Ventsislav //www.1kvaups.com/users/23715 2019-04-04T22:25:48Z 2019-04-04T22:25:48Z

I want to restore missing parts of a character. You can see some examples after the pages are edited with Scantailor:

enter image description here

Note that I want to keep the same font, so I don't want to just OCR the text and re-create the pages of the scanned book with some new font.

My question is: Can I use machine learning to solve this problem?

//www.1kvaups.com/q/11655 2 Is max pooling really bad? user559678 //www.1kvaups.com/users/23688 2019-04-04T20:10:23Z 2019-04-05T05:55:17Z

There has been discussion on this, maybe from Hinton himself. And I heard that many max pooling layers have been replaced by conv layers in recent years; is that true?
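For context, a minimal Keras sketch (layer sizes arbitrary) of the replacement usually meant in these discussions: a max pooling layer swapped for a stride-2 convolution, so that the downsampling itself is learned.

from tensorflow import keras
from tensorflow.keras import layers

# Variant A: conventional conv + max pooling downsampling.
pooled = keras.Sequential([
    layers.Conv2D(32, 3, padding="same", activation="relu", input_shape=(32, 32, 3)),
    layers.MaxPooling2D(pool_size=2),
])

# Variant B: the pooling layer replaced by a strided convolution,
# which halves the spatial resolution with learned weights.
strided = keras.Sequential([
    layers.Conv2D(32, 3, padding="same", activation="relu", input_shape=(32, 32, 3)),
    layers.Conv2D(32, 3, strides=2, padding="same", activation="relu"),
])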

//www.1kvaups.com/q/11654 0 How is the length of an input sequence related to the structure of an RNN? AntonYellow //www.1kvaups.com/users/23717 2019-04-04T19:56:50Z 2019-04-04T20:38:29Z

My question is only with regard to the feedforward part of an RNN. I am following these steps.

I am working on prediction of a time series. The time series is a toy model generated by me. It is composed of 200 numbers: 150 for training and 50 for validation. Given a sequence of 50 numbers, it should predict the 51st number. If x1 = 1, 2, ..., 50, then y1 = 51. If x2 = 2, 3, ..., 51, then y2 = 52, and so on. I have 100 inputs and 100 outputs.

I don't understand how this sequence is related to the simple architecture of an RNN. In this architecture, hidden(t) is obtained from the input and from hidden(t-1) through the weight matrices. Do I have to sequentially feed each input to the RNN unrolled in time and calculate all the outputs, with a global loss over the time span? Do I then need 100 input neurons? Given an input sequence length, what is the number of input neurons I need?

Thank you for your help! It seems a silly question, but I got stuck conceptually on this.
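A minimal Keras sketch of how the toy sequence described above is usually shaped for an RNN: the sequence length (50) becomes the number of unrolled time-steps and each step carries a single feature, so only one input neuron is needed per step (layer sizes and epochs are arbitrary).

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

series = np.arange(1, 201, dtype="float32")               # the toy series 1..200

# Build (samples, time-steps, features) windows: 50 inputs -> 1 target.
X = np.stack([series[i:i + 50] for i in range(100)])[..., None]   # (100, 50, 1)
y = series[50:150]                                                 # the 51st value after each window

model = keras.Sequential([
    layers.SimpleRNN(16, input_shape=(50, 1)),   # 50 time-steps, 1 feature per step
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, verbose=0)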

//www.1kvaups.com/q/11648 1 What is the motivation behind using a deterministic policy? Tracy Yang //www.1kvaups.com/users/23707 2019-04-04T17:43:07Z 2019-04-04T19:32:49Z

What is the motivation behind using a deterministic policy? Given that the environment is uncertain, it seems a stochastic policy makes more sense.

//www.1kvaups.com/q/11643 0 Back propagation on Flatten Layer in CNN Clement Hui //www.1kvaups.com/users/23713 2019-04-04T15:26:30Z 2019-04-04T19:52:23Z

I am making a NN library without any other external NN libraries and am implementing the Flatten layer. I know the forward implementation of the flatten layer, but for the backward pass, is it just reshaping the gradient back, or not? If it is, can I just call a simple numpy reshape function to reshape it?
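A minimal numpy sketch of what is being asked, assuming the layer only needs to cache the incoming shape: the backward pass is indeed just a reshape of the upstream gradient back to that shape.

import numpy as np

class Flatten:
    def forward(self, x):
        self.input_shape = x.shape                  # cache e.g. (batch, H, W, C)
        return x.reshape(x.shape[0], -1)            # (batch, H*W*C)

    def backward(self, grad_output):
        # No weights, so the only job is reshaping the upstream gradient
        # back to the original input shape.
        return grad_output.reshape(self.input_shape)

# Usage
layer = Flatten()
x = np.random.randn(8, 4, 4, 3)
out = layer.forward(x)                              # (8, 48)
grad_in = layer.backward(np.ones_like(out))
assert grad_in.shape == x.shape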

//www.1kvaups.com/q/11640 0 How large should the replay buffer be? Tracy Yang //www.1kvaups.com/users/23707 2019-04-04T14:40:34Z 2019-04-04T18:59:24Z

I'm learning the DDPG algorithm by following the OpenAI Spinning Up document on DDPG, where it is written:

In order for the algorithm to have stable behavior, the replay buffer should be large enough to contain a wide range of experiences, but it may not always be good to keep everything.

What does this mean? Is it related to the tuning of the batch size parameter in the algorithm?
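For illustration, a minimal replay buffer sketch (not the Spinning Up implementation): the buffer capacity limits how much past experience is retained, while the batch size is a separate knob that only controls how many stored transitions are sampled per update.

import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity):
        # Old transitions are silently dropped once capacity is exceeded.
        self.buffer = deque(maxlen=capacity)

    def store(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform random minibatch, independent of the buffer's capacity.
        return random.sample(self.buffer, batch_size)

buffer = ReplayBuffer(capacity=1_000_000)    # a typically large capacity
# ... store transitions while acting ...
# batch = buffer.sample(batch_size=100)      # batch size is tuned separately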

//www.1kvaups.com/q/11639 2 Inform policy learning of environment constants Seanny123 //www.1kvaups.com/users/23703 2019-04-04T14:22:51Z 2019-04-04T15:16:02Z

Policy learning refers to mapping an agent's state onto an action to maximize reward. A linear policy, such as the one used in the Augmented Random Search paper, refers to learning a linear mapping between state and action.

The entire state can change at each time-step: for example, in the Continuous Mountain Car OpenAI Gym environment, the position and speed of the car change at each time-step.

However, assume we also wanted to communicate the constant position of one or more goals, for example if there was a goal on the left and on the right of the Mountain Car. Are there examples of how this constant/static information can be communicated from the environment, other than appending the location of the two goals to the state vector? Can static/constant state be differentiated from state which changes with each action?
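For concreteness, a minimal sketch of the baseline mentioned above (simply concatenating static goal positions onto the changing observation), assuming the classic Gym API; the goal coordinates and the zero-initialized linear policy are purely illustrative.

import numpy as np
import gym

env = gym.make("MountainCarContinuous-v0")
goal_positions = np.array([-1.1, 0.45])          # illustrative static goals (left, right)

obs = env.reset()                                 # changes every step: [position, velocity]
augmented_obs = np.concatenate([obs, goal_positions])   # the static part is just appended

# A linear policy then sees both parts; nothing in the state vector itself
# marks which entries are constant and which ones change with each action.
theta = np.zeros((env.action_space.shape[0], augmented_obs.shape[0]))
action = theta @ augmented_obs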

//www.1kvaups.com/q/11634 0 Is any classifier not subject (or less susceptible) to fooling? user559678 //www.1kvaups.com/users/23688 2019-04-04T04:19:09Z 2019-04-04T20:07:12Z

Is any classifier not subject to fooling, as in here?

I agree that the question is related to the other one, as commented by Philip. But I guess it is not completely a duplicate, as pointed out by hisairnessag3. What I wanted to ask is whether any classifiers are inherently not subject (or less prone) to attack. I have a feeling that non-linear classifiers should be less susceptible to attack. By the way, is any benchmark available on, say, simple K-nearest neighbour classifiers?

//www.1kvaups.com/q/11629 0 Does backpropagation update weights one layer at a time? Joshua Jones //www.1kvaups.com/users/23687 2019-04-03T23:36:00Z 2019-04-04T20:29:57Z

I am new to deep learning. Suppose that we have a neural network with one input layer, one output layer, and one hidden layer. Let's refer to the weights from input to hidden as $W$ and the weights from hidden to output as $V$. Suppose that we have initialized $W$ and $V$, and ran them through the neural network via the feedforward algorithm. Suppose that we have calculated $V$ via backpropagation. When estimating the ideal weights for $W$, do we keep the weights $V$ constant when updating $W$ via gradient descent, given we already calculated $V$, or do we allow $V$ to update along with $W$?

So, in the code, which I am trying to write from scratch, do we include $V$ in the for loop that will be used for gradient descent to find $W$? In other words, do we simply use the same $V$ for every iteration of gradient descent?
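A minimal numpy sketch of the usual training loop (toy sizes, sigmoid hidden layer and squared error assumed): both gradients are computed from the same forward pass, and $W$ and $V$ are then updated together in every iteration rather than one being frozen.

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))                 # toy inputs
Y = rng.standard_normal((100, 1))                 # toy targets

W = rng.standard_normal((3, 4)) * 0.1             # input  -> hidden
V = rng.standard_normal((4, 1)) * 0.1             # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.1

for _ in range(1000):
    # Forward pass
    H = sigmoid(X @ W)                            # hidden activations
    Y_hat = H @ V                                 # linear output

    # Backward pass: both gradients come from this same forward pass.
    dY = (Y_hat - Y) / len(X)
    dV = H.T @ dY
    dH = (dY @ V.T) * H * (1 - H)
    dW = X.T @ dH

    # Both weight matrices are updated in the same iteration.
    V -= lr * dV
    W -= lr * dW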

//www.1kvaups.com/q/11627 1 How to detect LEGO bricks by using a deep learning approach? melawiki //www.1kvaups.com/users/23682 2019-04-03T18:50:29Z 2019-04-04T20:42:22Z

In my thesis I dealt with the question of how a computer can recognize LEGO bricks. With multiple object detection in mind, I chose a deep learning approach. I also looked at an existing training set of LEGO brick images and tried to optimize it.

My approach

By using TensorFlow's Object Detection API on a dataset of specifically generated images (created with Blender), I was able to detect 73.3% of multiple LEGO bricks in one photo.

One of the main problems I noticed was that I tried to distinguish three different 2x4 bricks. However, colors are difficult to distinguish, especially in different lighting conditions. A better approach would have been to distinguish a 2x4 from a 2x2 and a 2x6 LEGO brick.

Furthermore, I have noticed that the training set should ideally consist of both "normal" and synthetically generated images. The synthetic images give variations in the lighting conditions, the backgrounds, etc., which the photographed images do not give. However, when the trained neural network is used, photos rather than synthetic images are examined. Therefore, photos should also be included in the training data set.

One last point that would probably lead to even better results is to train the neural network with pictures that show more than one LEGO brick, because this is exactly what is required of the network when it is in use.

  • Are there other ways I could improve upon this?

(Can you see any further potential for improvement for the neural network? How would you approach the issue? Do any of my approaches seem poor? How would you solve the problem?)

//www.1kvaups.com/q/11588 1 Creative AI semester project (4 week time-frame) Kate Catalena //www.1kvaups.com/users/23622 2019-04-01T18:41:45Z 2019-04-04T17:50:53Z

I am taking AI this semester and we have a semester project.We can choose just about anything.I was wondering if anyone has a creative idea that I might be able to do.Nothing that is so extensive that it cannot be finished in four weeks.Any help would be so appreciated!

Some background information: I am a graduate student in CS,but this is my first AI course.My research area is in the space of data mining and analytics.I am open to doing anything that seems interesting and creative.

//www.1kvaups.com/q/11575 2 Cold start collaborative filtering with NLP Derek Hans //www.1kvaups.com/users/23599 2019-04-01T04:53:30Z 2019-04-04T20:24:16Z

I'm looking to match two pieces of text, e.g. IMDb movie descriptions and each person's description of the type of movies they like. I have an existing set of ~5000 matches between the two. I particularly want to overcome the cold-start problem: which movies should be recommended to a new user? When a new movie comes out, to which users should it be recommended? I see two options:

  1. Run each description of a person through an LSTM; do the same for each movie description; concatenate the results for some subset of possible combinations of people and movies, and attach them to a dense net to then predict whether it's a match or not (sketched below).
  2. Attempt to augment collaborative filtering with the output of running the movie description and person description through a text learner.

Are these tractable approaches?
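For what it's worth, a minimal Keras sketch of option 1, assuming both descriptions are already tokenized into fixed-length integer sequences; all sizes are illustrative.

from tensorflow import keras
from tensorflow.keras import layers

vocab_size, seq_len = 20_000, 200             # illustrative
user_in = keras.Input(shape=(seq_len,))       # person's description (token ids)
movie_in = keras.Input(shape=(seq_len,))      # movie description (token ids)

embed = layers.Embedding(vocab_size, 64)      # shared embedding for both texts
user_vec = layers.LSTM(64)(embed(user_in))
movie_vec = layers.LSTM(64)(embed(movie_in))

merged = layers.concatenate([user_vec, movie_vec])
hidden = layers.Dense(64, activation="relu")(merged)
match = layers.Dense(1, activation="sigmoid")(hidden)    # match / no match

model = keras.Model([user_in, movie_in], match)
model.compile(optimizer="adam", loss="binary_crossentropy")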

//www.1kvaups.com/q/11452 -2 Who was Henry White Pierce? Manuel Rodriguez //www.1kvaups.com/users/11571 2019-03-25T21:52:23Z 2019-04-04T21:07:49Z

Recently, in an online obituary, it was announced that Henry White Pierce died at the age of 88. According to the news headline, he was a researcher in the topics of organ transplantation, in-vitro fertilization, and artificial intelligence. According to Google Scholar, the name "HW Pierce" is listed as an author who published about some topics decades ago. But the full text is stored in JSTOR, so I can't read it.

My question is: does anybody know him personally? Has he published a lot about Artificial Intelligence? Should we remember him?

The reason why I'm asking is simple. Academic progress can only be made on the shoulders of giants. The lifespan of a single person is not enough to research a topic in depth. What the later generation can do is take the scientific legacy of a professor and try to remember him. If the work of a person is lost, it's not possible to reference his papers. What I don't understand is why somebody can die, all his work gets lost, and nobody remembers him.

//www.1kvaups.com/q/11163 2 Should we multiply the target of actor by the importance sampling ratio when prioritized replay is applied to DDPG? Sherwin Chen //www.1kvaups.com/users/8689 2019-03-12T01:11:11Z 2019-04-04T16:37:20Z

According to PER, we have to multiply the $Q$ error $\delta_i$ by the importance sampling ratio to correct the bias introduced by the imbalanced sampling of PER, where the importance sampling ratio is defined as $$w_i=\left({1\over N}{1\over P(i)}\right)^\beta$$ in which $1/N$ is the probability of drawing a sample uniformly from the buffer, and $P(i)$ is the probability of drawing a sample from PER.

I'm wondering if we have to do the same to the target of the actor when we apply PER to DDPG. That is, multiplying $-Q(s_i,\mu(s_i))$ by $w_i$, where $\mu$ is the output of the actor.

In my opinion, it is necessary. And I've done some experiments in the gym environment BipedalWalker-v2. The results, however, are quite confusing: I consistently get better performance when I do not apply importance sampling to the actor. Why would this be the case?
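For reference, a short PyTorch-style sketch (not taken from any paper) of the two variants being compared: weighting only the critic's TD error by $w_i$, versus also weighting the actor objective; q_net and actor are hypothetical networks supplied by the caller.

import torch

def ddpg_losses(q_net, actor, batch, weights):
    """batch: (states, actions, q_targets); weights: PER importance weights w_i."""
    states, actions, q_targets = batch

    # Critic: PER weights the TD error by w_i (this part is uncontroversial).
    td_error = q_net(states, actions) - q_targets
    critic_loss = (weights * td_error.pow(2)).mean()

    # Actor, variant A (unweighted) vs. variant B (weighted by w_i);
    # the question is whether variant B is actually required.
    actor_loss_unweighted = -q_net(states, actor(states)).mean()
    actor_loss_weighted = -(weights * q_net(states, actor(states))).mean()

    return critic_loss, actor_loss_unweighted, actor_loss_weighted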

//www.1kvaups.com/q/11004 0 Attempting to solve an optical character recognition task using a feed-forward network Chal.lo //www.1kvaups.com/users/22869 2019-03-04T21:13:16Z 2019-04-04T13:02:00Z

I am doing some experimentation on neural networks, and for that I am trying to program a plain OCR task. I have learned that CNNs are the best choice, but for the time being, and due to my inexperience, I want to go step by step and start with feedforward nets.

So my training data is a set of roughly 400 16x16 images, extracted from a script that draws every alphabet character in a tiny image for a small set of fonts registered on my computer.

The test data set is extracted with the same procedure, but for all fonts on my computer.

Well, results are quite bad. I get an accuracy of approx. 45-50%, which is very poor... but that's not my question.

The point is that I can't get the MSE below 0.0049, no matter what hidden layer configuration I apply to the net. I have tried several architectures and it all comes down to this figure. Does that mean the net cannot learn any further given the data?

This MSE value, however, also yields these poor results.

I am using the TensorFlow API directly, no Keras or Estimators. For a list of 62 recognizable characters, these are examples of the architectures I have used: [256, 1860, 62], [256, 130, 62], [256, 256, 128, 62], [256, 3600, 62], ...

But I never get the MSE below 0.0049, and the results are still not over 50%.

Any hints are greatly appreciated.

//www.1kvaups.com/q/10318 1 Can I use deterministic policy gradient methods for stochastic policy learning? Xuezhou Zhang //www.1kvaups.com/users/21892 2019-01-31T01:33:56Z 2019-04-04T16:37:46Z

Can I treat a stochastic policy (over a finite action space of size $n$) as a deterministic policy (into the set of probability distributions in $\mathbb{R}^n$)?

It seems to me that nothing is broken by making this mental translation, except that the "induced environment" now has to take a stochastic action and spit out the next state, which is not hard to build on top of the original environment. Is this legit? If yes, how does this "deterministify then DDPG" approach compare to, for example, A2C?

//www.1kvaups.com/q/9137 1 What are some limitations of using Collaborative Deep learning for Recommender systems? Ritesh sawant //www.1kvaups.com/users/10118 2018-11-24T06:07:42Z 2019-04-04T20:40:18Z

Recently I worked through a paper by Hao Wang, Collaborative Deep Learning for Recommender Systems, which uses a two-way, tightly coupled method: collaborative filtering for item correlation and stacked denoising autoencoders for the optimization of the problem.

I want to know the limitations of applying stacked autoencoders and hierarchical Bayesian methods to recommender systems.

//www.1kvaups.com/q/8049 1 In what way can we measure control between humans and machines? Douglas Daseeco //www.1kvaups.com/users/4302 2018-09-19T20:37:49Z 2019-04-05T06:26:16Z

This is not a soft question. Neither is this question related to the singularity conjecture or wars with robots.

This question seeks a mathematical formulation of what is currently only qualitative and thus not clearly understood. It relates to servitude, dominance, and what measure of control species of biological or artificial entities exert over others.

Dominance Relationships Quantified

We do know and rarely doubt or argue about the following somewhat self-evident statement.

Control implies dominance.

This question focuses on how we can evaluate quantitatively whether humans are dominant over artificial systems or whether those artificial systems now dominate humans. This may seem esoteric or philosophical to some, but it is not. The balance of power between humans and artificial systems is a concrete phenomenon that may be accurately represented as a function of discrete events.

We see artificial systems, with varying degrees of automation, adaptability, intelligence, and other qualitative features, succumbing to the controlling forces of humans that deploy them to serve humanity without question. This is the focus of technophiles.

We also see an ever increasing number of articles on the web about game addiction, social network addiction, and texting addiction, which, at the current trend, will possibly surpass the volume of heroin addiction articles. We see the number of hours humans in industrialized countries interact with display devices growing, with an ever increasing proportion of the visual content being generated artificially. This is the focus of technophobes.

What is the balance of this equilibrium?

In biological systems, we see that termites are highly adaptive and can eat human habitats, yet humans can build with insect-resistant materials and apply insecticides. Those methods of control are greater than the control termites exhibit over wood, as remarkable as those who study termites say it is.

An Example Mathematical Model

The above statement of inference, "Control implies dominance," can be represented in many formal ways. This is an example mathematical model that exhibits some features of importance but is not fully developed as a model.

  • $o_{e\epsilon}$ is the obedience exhibited by entity $e$ to commands given by entity $\epsilon$.
  • $m_{e\epsilon}$ is the mechanical compliance exhibited by entity $e$ to manipulations instrumented by entity $\epsilon$.
  • $i_{e\epsilon}$ is the concession of entity $e$ to influences created by entity $\epsilon$.
  • $u_{e\epsilon}$ is the unconscious purposeful behavior exhibited by entity $e$ in response to hidden manipulations instrumented by entity $\epsilon$.
  • $T$ is the measurement time period.
  • $D_{e\epsilon}$ is the dominance of entity $e$ over entity $\epsilon$.

$\sum_T o_{ab} + \sum_T m_{ab} + \sum_T i_{ab} + \sum_T u_{ab} > \sum_T o_{ba} + \sum_T m_{ba} + \sum_T i_{ba} + \sum_T u_{ba} \implies D_{ba} > 0$

The sum, over any given measurement period, of forms of control of $a$ over $b$, when greater than that sum in the opposite direction, implies that $a$ is dominant over $b$.

Inclusion of Non-adversarial Interaction

Similarly,symbiosis implies collaboration.

This may directly relate to the question because not all interactions between entities, types of entities, species, or artificial systems are adversarial. In fact, it is highly probable that there is more collaboration than dominance in the world. This may be a basic fact of economics. Let's examine this related inference using the same mathematical strategy.

  • $c_{e\epsilon}$ is the conscious symbiotic tie of entity $e$ to entity $\epsilon$.
  • $b_{e\epsilon}$ is the mechanical binding of entity $e$ to entity $\epsilon$.
  • $q_{e\epsilon}$ is the asymmetry in an equilibrium based tie between entity $e$ and entity $\epsilon$.
  • $C_{e\epsilon}$ is the collaboration between entity $e$ and entity $\epsilon$.

$\sum_T c_{ab} + \sum_T b_{ab} + \sum_T q_{ab} > \sum_T c_{ba} + \sum_T b_{ba} + \sum_T q_{ba} \implies C_{ba} > 0$

The sum, over any given measurement period, of forms of symbiosis between $a$ and $b$, implies that there is positive collaboration between entities $a$ and $b$.

Returning to the Focal Question

In what way can we measure control between humans and machines?

The questions below are not THE question. The one above is. However, these may elucidate the relevance of the primary question.

  • Exactly how much are humans and artificial systems collaborating symbiotically?
  • How much are they adversarial in some way,and,in that respect,which side is dominant and to what degree?
  • Are there classes of artificial systems that dominate over classes of humans, as in technology enthusiasts that have quantifiable debt resulting from technology purchases?
  • Are there classes of humans that dominate over artificial systems,like government entities that monitor and can regulate the packets of information over the Internet between nations?

Most, if not all, of this is measurable, yet no commonly known body of theory has emerged that measures it, so that public awareness of our state relative to artificial systems could be known rather than discussed without any basis for knowledge.

There should be.

//www.1kvaups.com/q/6579 4 What is experience replay in laymen's terms? user491626 //www.1kvaups.com/users/15967 2018-05-30T19:09:05Z 2019-04-04T16:09:05Z

I've been reading Google DeepMind's Atari paper and I'm trying to understand the concept of "experience replay". Experience replay comes up in a lot of other reinforcement learning papers (particularly the AlphaGo paper), so I want to understand how it works. Below are some excerpts.

First, we used a biologically inspired mechanism termed experience replay that randomizes over the data, thereby removing correlations in the observation sequence and smoothing over changes in the data distribution.

The paper then elaborates as follows (I've taken a screenshot, since there are a lot of mathematical symbols that are difficult to reproduce):

enter image description here

What is experience replay and what are its benefits in laymen's terms?
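As a rough illustration (not the DQN implementation itself), experience replay just means storing every transition the agent experiences and training on randomly drawn past transitions instead of only the most recent one:

import random

replay_memory = []        # stores (state, action, reward, next_state, done) tuples

def remember(transition, capacity=100_000):
    replay_memory.append(transition)
    if len(replay_memory) > capacity:
        replay_memory.pop(0)              # forget the oldest experience

def sample_minibatch(batch_size=32):
    # Random draws break the correlation between consecutive frames,
    # and each stored experience can be reused for many updates.
    return random.sample(replay_memory, batch_size)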

//www.1kvaups.com/q/5645 2 Q-learning in Python Jessica Chambers //www.1kvaups.com/users/12940 2018-03-12T10:23:44Z 2019-04-04T18:36:45Z

I'm working on a Q-learning project that involves a "robot" solving a maze, and there is a problem with how I update the Q values (every time, the robot ends up switching between two squares instead of actually learning), but I'm not sure where: I am at my wits' end. Any pointers are welcome; here is the minimal viable example (I really can't condense it much more). Thanks!

from enum import Enum
import numpy as np
from random import randrange
import string
import random

class Direction(Enum):
    up=0
    down=1
    left=2
    right=3

stepsTaken=0
nbMaxSteps=500
Q = {}
gamma=0.95
strat=1
epsilon=0.99
maze=[]
penalty=0

#values of each movement
step=-1
stepTrap=-20
stepExit=500
stepWall=-100

#current position of the robot
position=[0,0]

#funciton that checks if a certain place in the Q matrix is empty,returns 1 if it is
def currentQEmpty():
    global Q
    global position
    moves=[]
    if (position[0]!=0):
        moves.append(Direction.left)
    if (position[0]!=cols-1):
        moves.append(Direction.right)
    if (position[1]!=0):
        moves.append(Direction.down)
    if (position[1]!=rows-1):
        moves.append(Direction.up)
    for d in moves:
        if (Q.get((position[0],position[1],d),'A')=='A'):
            return 1
    return 0

#intialise the Q matrix
cols=10
rows=10
values=np.zeros((rows,cols))
for x in range(rows):
    for y in range(cols):
        for dir in Direction:
            Q[(x,y,dir)] = 0

#fills the Q matrix (replaces empty values only)
def QFill(moves):
    global maze
    global position
    global Q
    global step
    global stepTrap
    global stepWall
    global stepExit
    global gamma
    for d in moves:
        reward=0
        newpos=position
        if d==Direction.up:
            newpos=[position[0],position[1]+1]
        if d==Direction.down:
            newpos=[position[0],position[1]-1]
        if d==Direction.left:
            newpos=[position[0]-1,position[1]]
        if d==Direction.right:
            newpos=[position[0]+1,position[1]]
        reward=reward+values[newpos[0],newpos[1]]
        if(Q.get((position[0],position[1],d),0)==0):
            Q[position[0],position[1],d]=reward

#Qmove: decides which move to make depending on current Q values
#this is where the issue is!
def Qmove(moves):
    global position
    global Q
    global step
    global stepTrap
    global stepWall
    global stepExit
    global gamma
    bestd=0
    newd=moves[random.randint(0,len(moves)-1)]
    for d in moves:
        newpos=position
        if d==Direction.up:
            newpos=[position[0],position[1]+1]
        if d==Direction.down:
            newpos=[position[0],position[1]-1]
        if d==Direction.left:
            newpos=[position[0]-1,position[1]]
        if d==Direction.right:
            newpos=[position[0]+1,position[1]]
        #update value to best value of new position
        if Q.get((newpos[0],newpos[1],d),0)>=Q.get((newpos[0],newpos[1],bestd),0):
            bestd=d
        Q[position[0],position[1],d]=Q.get((position[0],position[1],d),0)+ (values[newpos[0]][newpos[1]] + gamma * Q.get((newpos[0],newpos[1],bestd),1) - Q.get((position[0],position[1],d),0))
        #update arrow
        if Q.get((position[0],position[1],d),0)>Q.get((position[0],position[1],newd),0):
            newd=d
    return newd

#create maze
ch=['0','1','3']
for i in range(cols):
    maze.append([0]*(cols))
    for j in range(cols):
        random_index = randrange(0,len(ch))
        maze[i][j]=ch[random_index]
        if i==cols-1 and j==cols-1:
            maze[i][j]='5'
        if i==0 and j==0:
            maze[i][j]='0'
        if(maze[i][j]=="1"):
            values[i][j]=step
        elif(maze[i][j]=="0"):
            values[i][j]=stepWall
        elif(maze[i][j]=="3"):
            values[i][j]=stepTrap
        else:
            values[i][j]=stepExit

#move
while(stepsTaken
       
//www.1kvaups.com/q/5638 3 Hand computing feed forward and back propagation of neural network Eka //www.1kvaups.com/users/39 2018-03-12T01:49:01Z 2019-04-05T05:43:07Z

I used to treat backpropagation as a black box, but lately I want to understand more about it. I have used mattmuzr's and DuttA's explanations as a guide to hand-compute a simple neural network. I have computed the feedforward and backpropagation passes for a network similar to this one, with one input, one hidden and one output layer.

enter image description here

Here are my computations

enter image description here

enter image description here

enter image description here

Are my computations correct?

Full LaTeX code

//www.1kvaups.com/q/5042 1 How do stacked denoising autoencoders work Ritesh sawant //www.1kvaups.com/users/10118 2018-01-17T12:49:10Z 2019-04-04T20:38:39Z

I've been studying a recommender system which uses a collaborative deep learning approach and Bayesian learning. It has the following NN representation:

sdae

I need to know how stacked denoising autoencoders work.

Here is the link to the paper: http://www.wanghao.in/paper/KDD15_CDL.pdf

//www.1kvaups.com/q/4307 0 Recommendation system based on content type abc //www.1kvaups.com/users/10279 2017-10-20T11:10:19Z 2019-04-04T20:38:06Z

I am new to this field and would like to know for what kinds of data types, other than images, a recommendation system can be created using machine learning. For content like audio or video, is it necessary to use a data set of the actual audio or video files, or is just text information about the files enough for such a system?

//www.1kvaups.com/q/4236 -1 Over-exposure of certain items in content based recommendation engine Hartger //www.1kvaups.com/users/10092 2017-10-09T13:13:35Z 2019-04-04T20:38:48Z

I'm working on a content-based recommendation engine for ebooks. I create document vectors with 300 features for every ebook using a word2vec model trained on Google News, and determine recommendations based on the closest vectors. So far I have run tests on a dataset consisting of 200 books from Project Gutenberg in four different categories.

I find that a small group of books appears to be recommended a lot more than others. The most recommended book is recommended for over half the dataset. This book is Theaetus, in which the most common tokens are fairly specific to the book (tokens like socrates, theaetus). I can find no intuitive reason why this book would match so well with over half my dataset.

Is this common behavior when using document vectors to determine similarity? Are there any methods that would reduce this effect?

//www.1kvaups.com/q/2865 3 Why does the cost function contain a 2 at the denominator? Dmitry Nalyvaiko //www.1kvaups.com/users/5661 2017-02-23T14:41:58Z 2019-04-04T17:36:23Z

A cost function often used in machine learning is the following

$$C = \frac{1}{2} \| y - \hat{y} \| ^2$$

Why is there a $\frac{1}{2}$ in front of the squared distance?
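For reference, a short derivation of the usual motivation (not stated in the question itself): the factor cancels the 2 produced by differentiating the square, leaving a clean gradient.

$$\frac{\partial C}{\partial \hat{y}} = \frac{\partial}{\partial \hat{y}}\left(\frac{1}{2}\|y-\hat{y}\|^2\right) = -(y-\hat{y})$$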

//www.1kvaups.com/q/2347 4 Could artificial intelligence cause problems for humanity after figuring out human behavior? quintumnia //www.1kvaups.com/users/1581 2016-11-17T18:22:39Z 2019-04-04T21:57:54Z

This BBC article suggests that intelligent algorithms, like those that select news stories and advertisements for display, could control our experience of the world and manipulate us.

My question is: will Artificial Intelligence someday become a problem for humanity after learning human behaviors and characteristics?

//www.1kvaups.com/q/1989 6 Is there a trade-off between flexibility and efficiency? Tariq Ali //www.1kvaups.com/users/181 2016-09-18T19:43:02Z 2019-04-04T20:37:24Z

A "general intelligence" may be capable of learning a lot of different things,but possessing capability does not equal actually having it.The "AGI" must learn...and that learning process can take time.If you want an AGI to drive a car or play Go,you have to find some way of "teaching" it.Keep in mind that we have never built AGIs,so we don't know how long the training process can be,but it would be safe to assume pessimistic estimates.

Contrast that to a "narrow intelligence".The narrow AI already knows how to drive a car or play Go.It has been programmed to be very excellent at one specific t必威电竞ask.You don't need to worry about training the machine,because it has already been pre-trained.

A "general intelligence" seems to be more flexible than a "narrow intelligence".You could buy an AGI and have it drive a carandplay Go.And if you are willing to do more training,you can even teach it a new trick:how to bake a cake.I don't have to worry about unexpected t必威电竞asks coming up,since the AGI willeventuallyfigure out how to do it,given enough training time.I would have to wait along timethough.

A "narrow intelligence" appears to bemore efficientat its assigned t必威电竞ask,due to it being programmed specifically for that t必威电竞ask.It knows exactly what to do,and doesn't have to waste time "learning" (unlike our AGI buddy here).Instead of buying one AGI to handle a bunch of different t必威电竞asks poorly,I would rather buy a bunch of specialized narrow AIs.Narrow AI #1 drives cars,Narrow AI #2 plays Go,Narrow AI #3 bake cakes,etc.That being said,this is a very brittle approach,since if some unexpected t必威电竞ask comes up,none of my narrow AIs would be able to handle it.I'm willing to accept that risk though.

Is my "thinking" correct?Is there a trade-off between flexibility (AGI) and efficiency (narrow AI),like what I have just described above?Or is it theoretically possible for an AGI to be both flexible and efficient?