We propose the nuclear norm penalty as an alternative to the ridge penalty for regularized multinomial regression. This convex relaxation of reduced-rank multinomial regression has the advantage of leveraging underlying structure among the response categories to make better predictions. We apply our method, nuclear penalized multinomial regression (NPMR), to Major League Baseball play-by-play data to predict outcome probabilities based on batter-pitcher matchups. The interpretation of the results meshes well with subject-area expertise and also suggests a novel understanding of what differentiates players.
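The workhorse behind nuclear norm penalties is singular value thresholding, the proximal operator of the nuclear norm. A minimal NumPy sketch (my own illustration, not the paper's NPMR implementation) shows how thresholding the singular values of a noisy matrix yields a low-rank estimate:

```python
import numpy as np

def svt(B, tau):
    """Singular value thresholding: the proximal operator of tau * ||.||_*.
    Shrinks each singular value of B toward zero by tau, zeroing small ones."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

# A rank-1 matrix plus small noise: thresholding recovers a rank-1 estimate,
# since the noise singular values fall below tau while the signal does not.
rng = np.random.default_rng(0)
u, v = rng.standard_normal(5), rng.standard_normal(4)
B = np.outer(u, v) + 0.01 * rng.standard_normal((5, 4))
B_low = svt(B, tau=0.5)
print(np.linalg.matrix_rank(B_low))  # low-rank structure recovered
```

In NPMR the same shrinkage idea is applied to the matrix of regression coefficients, so response categories that behave similarly (here, similar plate-appearance outcomes) share a low-dimensional structure.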
“In this paper specifically, we have tried to predict the [ranking] of the player in the ultimate survival test,” the project’s contributors wrote in a preprint paper (“Survival of the Fittest in PlayerUnknown’s BattleGrounds”) published on arXiv.org. “We have applied multiple machine learning models to find the optimum prediction.”
Berg and Ulfberg, and Amano and Maruoka, have used CNF-DNF approximators to prove exponential lower bounds for the monotone network complexity of the clique function and of Andreev’s function. We show that these approximators can be used to prove the same lower bound for their non-monotone network complexity. This implies P ≠ NP.
More background at: The P-versus-NP page
This page collects links around papers that try to settle the “P versus NP” question (in either way). Here are some links that explain/discuss this question:
These patterns explore the forces that encourage the emergence of a BIG BALL OF MUD, and the undeniable effectiveness of this approach to software architecture. What are the people who build them doing right? If more high-minded architectural approaches are to compete, we must understand the forces that lead to a BIG BALL OF MUD, and examine alternative ways to resolve them.
A number of additional patterns emerge out of the BIG BALL OF MUD. We discuss them in turn. Two principal questions underlie these patterns: Why are so many existing systems architecturally undistinguished, and what can we do to improve them?
Source: Big Ball of Mud
HTTPS may introduce overhead in terms of infrastructure costs, communication latency, data usage, and energy consumption. Moreover, given the opaqueness of the encrypted communication, any in-network value added services requiring visibility into application layer content, such as caches and virus scanners, become ineffective.
This paper proposes a new microblogging architecture based on peer-to-peer overlay networks. The proposed platform comprises three mostly independent overlay networks. The first provides distributed user registration and authentication and is based on the Bitcoin protocol. The second is a Distributed Hash Table (DHT) overlay network providing key/value storage for user resources and tracker location for the third network. The last network is a collection of possibly disjoint “swarms” of followers, based on the BitTorrent protocol, which can be used for efficient near-instant notification delivery to many users. By leveraging existing and proven technologies, twister provides a new microblogging platform offering security, scalability, and privacy features. A mechanism provides an incentive for entities that contribute processing time to run the user registration network, rewarding them with the privilege of sending a single unsolicited “promoted” message to the entire network. The number of unsolicited messages per day is capped so as not to upset users.
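To make the DHT layer concrete, here is a toy consistent-hashing key/value ring in Python (my own sketch; twister's actual DHT is Kademlia-style and far more involved). Each key is stored on the node whose hash is the first one clockwise from the key's hash:

```python
import hashlib
from bisect import bisect_right

class ToyDHT:
    """Toy key/value ring: each key lands on the first node clockwise
    from the key's position on the hash ring (consistent hashing)."""

    def __init__(self, nodes):
        # Sort nodes by their position on the ring.
        self.ring = sorted((self._h(n), n) for n in nodes)
        self.store = {n: {} for n in nodes}

    @staticmethod
    def _h(s):
        # Map a string to a 64-bit position on the ring.
        return int.from_bytes(hashlib.sha1(s.encode()).digest()[:8], "big")

    def _node_for(self, key):
        hashes = [h for h, _ in self.ring]
        i = bisect_right(hashes, self._h(key)) % len(self.ring)
        return self.ring[i][1]

    def put(self, key, value):
        self.store[self._node_for(key)][key] = value

    def get(self, key):
        return self.store[self._node_for(key)].get(key)

dht = ToyDHT(["nodeA", "nodeB", "nodeC"])
dht.put("alice/profile", {"bio": "hello"})
print(dht.get("alice/profile"))  # → {'bio': 'hello'}
```

The appeal of this scheme, and the reason twister builds on it, is that no single node holds a global index: any participant can locate the node responsible for a user's resources from the key alone.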
The white paper usefully explains the relationship with software-defined networking (SDN): “Network Functions Virtualisation aligns closely with the SDN objectives to use commodity servers and switches,” but importantly notes that NFV “goals can be achieved using non-SDN mechanisms.”
The white paper is available right here.
I find that although large companies tend to dominate patent headlines, most unique defendants in troll suits are small. Companies with less than $100M in annual revenue represent at least 66% of unique defendants, and 55% of unique defendants in PAE suits make under $10M per year. Suing small companies appears to distinguish PAEs from operating companies, which sued companies with less than $10M per year in revenue only 16% of the time, based on unique defendants. Based on survey responses, the smaller the company, the more likely it was to report a significant operational impact. A large percentage of respondents reported a “significant operational impact”: delayed hiring or achievement of another milestone, changes to the product, a pivot in business strategy, shutting down a business line or the entire business, and/or lost valuation. To the extent patent demands tax innovation, then, they appear to do so regressively, with small companies targeted more often as unique defendants, and paying more in time, money, and operational impact, relative to their size, than large firms.
In this document we discuss only the main Linux problems and deficiencies, while everyone should keep in mind that there are areas where Linux has surpassed other OSes (excellent package management, usually excellent stability, no widely circulating viruses/malware, no need for complete system reinstallation, free as in beer).
This is not a Windows vs. Linux comparison; however, I sometimes use Windows or macOS as a point of reference (after all, their market penetration is an order of magnitude higher).
This white paper provides information on general best practices, network protections, and attack identification techniques that operators and administrators can use for implementations of the Domain Name System (DNS) protocol.