
[1412.6980] Adam: A Method for Stochastic Optimization


Computer Science > Machine Learning

arXiv:1412.6980 (cs)

[Submitted on 22 Dec 2014 (v1), last revised 30 Jan 2017 (this version, v9)]

Title: Adam: A Method for Stochastic Optimization

Authors: Diederik P. Kingma, Jimmy Ba


Abstract: We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, requires little memory, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms that inspired Adam are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm.
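
As a concrete illustration of the "adaptive estimates of lower-order moments" mentioned above, here is a minimal NumPy sketch of the Adam update rule described in the paper (its Algorithm 1), using the default hyper-parameters the paper recommends (alpha = 0.001, beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8). The gradient oracle `grad_fn` and the toy quadratic objective in the usage line are hypothetical stand-ins added for demonstration, not taken from the paper.

```python
# Sketch of the Adam update rule (bias-corrected first and second moment
# estimates), with the default hyper-parameters recommended in the paper.
import numpy as np

def adam(grad_fn, theta0, steps=1000, alpha=1e-3,
         beta1=0.9, beta2=0.999, eps=1e-8):
    """Minimize a stochastic objective given its gradient oracle `grad_fn`."""
    theta = np.asarray(theta0, dtype=float).copy()
    m = np.zeros_like(theta)   # 1st moment estimate (mean of gradients)
    v = np.zeros_like(theta)   # 2nd raw moment estimate (uncentered variance)
    for t in range(1, steps + 1):
        g = grad_fn(theta)                     # stochastic gradient at step t
        m = beta1 * m + (1 - beta1) * g        # update biased 1st moment
        v = beta2 * v + (1 - beta2) * g * g    # update biased 2nd moment
        m_hat = m / (1 - beta1 ** t)           # bias-corrected 1st moment
        v_hat = v / (1 - beta2 ** t)           # bias-corrected 2nd moment
        theta -= alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta

# Toy usage (hypothetical objective): minimize f(x) = ||x||^2 with noisy gradients.
rng = np.random.default_rng(0)
noisy_grad = lambda x: 2 * x + 0.1 * rng.normal(size=x.shape)
print(adam(noisy_grad, theta0=np.ones(3)))     # drifts toward the zero vector
```

The AdaMax variant mentioned at the end of the abstract replaces the second-moment estimate with an exponentially weighted infinity norm, u_t = max(beta2 * u_{t-1}, |g_t|), which needs no bias correction.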

Comments: Published as a conference paper at the 3rd International Conference for Learning Representations, San Diego, 2015
Subjects: Machine Learning (cs.LG)
Cite as: arXiv:1412.6980 [cs.LG] (or arXiv:1412.6980v9 [cs.LG] for this version)

Submission history
From: Diederik P. Kingma, M.Sc.

[v1] Mon, 22 Dec 2014 13:54:29 UTC (280 KB)
[v2] Sat, 17 Jan 2015 20:26:06 UTC (283 KB)
[v3] Fri, 27 Feb 2015 21:04:48 UTC (289 KB)
[v4] Tue, 3 Mar 2015 17:51:27 UTC (289 KB)
[v5] Thu, 23 Apr 2015 16:46:07 UTC (289 KB)
[v6] Tue, 23 Jun 2015 19:57:17 UTC (958 KB)
[v7] Mon, 20 Jul 2015 09:43:23 UTC (519 KB)
[v8] Thu, 23 Jul 2015 20:27:47 UTC (526 KB)
[v9] Mon, 30 Jan 2017 01:27:54 UTC (490 KB)

