{"id":543666,"date":"2018-10-17T21:26:50","date_gmt":"2018-10-18T04:26:50","guid":{"rendered":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/?post_type=msr-research-item&#038;p=543666"},"modified":"2018-10-17T21:28:15","modified_gmt":"2018-10-18T04:28:15","slug":"how-to-make-the-gradients-small-stochastically-even-faster-convex-and-nonconvex-sgd","status":"publish","type":"msr-research-item","link":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/publication\/how-to-make-the-gradients-small-stochastically-even-faster-convex-and-nonconvex-sgd\/","title":{"rendered":"How To Make the Gradients Small Stochastically: Even Faster Convex and Nonconvex SGD"},"content":{"rendered":"<div class=\"dateline\">Submitted on 8 Jan 2018 (<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/1801.02982v1\">v1<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>), last revised 12 Jun 2018 (this version, v2)<\/div>\n<blockquote class=\"abstract mathjax\"><p>Stochastic gradient descent (SGD) gives an optimal convergence rate when minimizing convex stochastic objectives f(x).<br \/>\nHowever, in terms of making the gradients small, the original SGD does not give an optimal rate, even when f(x) is convex.<br \/>\nIf f(x) is convex, to find a point with gradient norm at most \u03b5, we design an algorithm SGD3 with a near-optimal rate \u00d5(\u03b5<sup>-2<\/sup>), improving the best known rate O(\u03b5<sup>-8\/3<\/sup>) of [17].<br \/>\nIf f(x) is nonconvex, to find an \u03b5-approximate local minimum, we design an algorithm SGD5 with rate \u00d5(\u03b5<sup>-3.5<\/sup>), whereas previous SGD variants achieve only \u00d5(\u03b5<sup>-4<\/sup>) [6, 15, 32]. This is no slower than the best known stochastic version of Newton&#8217;s method in all parameter regimes [29].<\/p><\/blockquote>\n","protected":false},"excerpt":{"rendered":"<p>Submitted on 8 Jan 2018 (v1), last revised 12 Jun 2018 (this version, v2) Stochastic gradient descent (SGD) gives an optimal convergence rate when minimizing convex stochastic objectives f(x). 
However, in terms of making the gradients small, the original SGD does not give an optimal rate, even when f(x) is convex. If f(x) is convex, [&hellip;]<\/p>\n","protected":false},"featured_media":0,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr-author-ordering":[{"type":"user_nicename","value":"Zeyuan Allen-Zhu","user_id":"36569"}],"msr_publishername":"","msr_publisher_other":"","msr_booktitle":"","msr_chapter":"","msr_edition":"","msr_editors":"","msr_how_published":"","msr_isbn":"","msr_issue":"","msr_journal":"","msr_number":"","msr_organization":"","msr_pages_string":"","msr_page_range_start":"","msr_page_range_end":"","msr_series":"","msr_volume":"","msr_copyright":"","msr_conference_name":"NIPS 2018","msr_doi":"","msr_arxiv_id":"","msr_s2_paper_id":"","msr_mag_id":"","msr_pubmed_id":"","msr_other_authors":"","msr_other_contributors":"","msr_speaker":"","msr_award":"","msr_affiliation":"","msr_institution":"","msr_host":"","msr_version":"","msr_duration":"","msr_original_fields_of_study":"","msr_release_tracker_id":"","msr_s2_match_type":"","msr_citation_count_updated":"","msr_published_date":"2018-12-2","msr_highlight_text":"","msr_notes":"","msr_longbiography":"","msr_publicationurl":"","msr_external_url":"","msr_secondary_video_url":"","msr_conference_url":"","msr_journal_url":"","msr_s2_pdf_url":"","msr_year":0,"msr_citation_count":0,"msr_influential_citations":0,"msr_reference_count":0,"msr_s2_match_confidence":0,"msr_microsoftintellectualproperty":true,"msr_s2_open_access":false,"msr_s2_author_ids":[],"msr_pub_ids":[],"msr_hide_image_in_river":0,"footnotes":""},"msr-research-highlight":[],"research-area":[13561,13556,13546],"msr-publication-type":[193716],"msr-publisher":[],"msr-focus-area":[],"msr-locale":[268875],"msr-post-option":[],"msr-field-of-study":[],"msr-conference":[],"msr-journal":[],"msr-impact-theme":[],"msr-pilla
r":[],"class_list":["post-543666","msr-research-item","type-msr-research-item","status-publish","hentry","msr-research-area-algorithms","msr-research-area-artificial-intelligence","msr-research-area-computational-sciences-mathematics","msr-locale-en_us"],"msr_publishername":"","msr_edition":"","msr_affiliation":"","msr_published_date":"2018-12-2","msr_host":"","msr_duration":"","msr_version":"","msr_speaker":"","msr_other_contributors":"","msr_booktitle":"","msr_pages_string":"","msr_chapter":"","msr_isbn":"","msr_journal":"","msr_volume":"","msr_number":"","msr_editors":"","msr_series":"","msr_issue":"","msr_organization":"","msr_how_published":"","msr_notes":"","msr_highlight_text":"","msr_release_tracker_id":"","msr_original_fields_of_study":"","msr_download_urls":"","msr_external_url":"","msr_secondary_video_url":"","msr_longbiography":"","msr_microsoftintellectualproperty":1,"msr_main_download":"","msr_publicationurl":"","msr_doi":"","msr_publication_uploader":[{"type":"url","viewUrl":"false","id":"false","title":"https:\/\/arxiv.org\/abs\/1801.02982","label_id":"243109","label":0}],"msr_related_uploader":"","msr_citation_count":0,"msr_citation_count_updated":"","msr_s2_paper_id":"","msr_influential_citations":0,"msr_reference_count":0,"msr_arxiv_id":"","msr_s2_author_ids":[],"msr_s2_open_access":false,"msr_s2_pdf_url":null,"msr_attachments":[],"msr-author-ordering":[{"type":"user_nicename","value":"Zeyuan Allen-Zhu","user_id":36569,"rest_url":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/microsoft-research\/v1\/researchers?person=Zeyuan 
Allen-Zhu"}],"msr_impact_theme":[],"msr_research_lab":[199565],"msr_event":[508112],"msr_group":[],"msr_project":[],"publication":[],"video":[],"msr-tool":[],"msr_publication_type":"inproceedings","related_content":[],"_links":{"self":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-research-item\/543666","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-research-item"}],"about":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-research-item"}],"version-history":[{"count":1,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-research-item\/543666\/revisions"}],"predecessor-version":[{"id":543669,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-research-item\/543666\/revisions\/543669"}],"wp:attachment":[{"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/media?parent=543666"}],"wp:term":[{"taxonomy":"msr-research-highlight","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-research-highlight?post=543666"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=543666"},{"taxonomy":"msr-publication-type","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-publication-type?post=543666"},{"taxonomy":"msr-publisher","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-publisher?post=543666"},{"taxonomy":"msr-focus-area","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-focus-area?post=543666"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=543666"},{"taxonomy":"msr-post-option","embeddable":true,"href":"ht
tps:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=543666"},{"taxonomy":"msr-field-of-study","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-field-of-study?post=543666"},{"taxonomy":"msr-conference","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-conference?post=543666"},{"taxonomy":"msr-journal","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-journal?post=543666"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=543666"},{"taxonomy":"msr-pillar","embeddable":true,"href":"https:\/\/cm-edgetun.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-pillar?post=543666"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}