Added pres

Alexander Munch-Hansen 2019-12-05 15:39:15 +01:00
parent 059387bbd6
commit 5a830e2ade
7 changed files with 172 additions and 0 deletions


@@ -0,0 +1,4 @@
package dk.au.pir.protocols.balancedBlockScheme;

// Placeholder for the client side of the balanced block scheme (not yet implemented).
public class balancedBlockClient {
}


@@ -0,0 +1,4 @@
package dk.au.pir.protocols.balancedBlockScheme;

// Placeholder for the server side of the balanced block scheme (not yet implemented).
public class balancedBlockServer {
}

Binary file not shown (new image, 27 KiB).

Binary file not shown (new image, 26 KiB).

Binary file not shown.

Binary file not shown (new image, 17 KiB).

pres/pres.tex Normal file (164 lines added)

@@ -0,0 +1,164 @@
\documentclass{beamer}
\setbeamertemplate{note page}[plain]
\usetheme[progressbar=frametitle]{metropolis}
\usepackage{pgfpages}
\usepackage[final]{pdfpages}
\setbeameroption{show notes on second screen=right}
% g \in G is explanation as a model
% f is the model we're trying to explain
% does, being model agnostic, means we do not care about specifics of f.
% We use Locally Weighted Square Loss as L, where I suspect pi is the weight and we thus estimate the difference between the actual model
% and our explanation, and multiply this with the proximity of the data point z, to x.
% Ask Lasse why min(L(f,g,pi_x(z)) + omega(g)) becomes intractable when omega(g) is a constant!
\usepackage{dirtytalk}
\usepackage{bbm}
\usepackage{setspace}
\usepackage[T1]{fontenc}
\usepackage[sfdefault,scaled=.85]{FiraSans}
%\usepackage{newtxsf}
\usepackage[ruled, linesnumbered]{algorithm2e}
\SetKwInput{kwRequire}{Require}
\SetKw{kwExpl}{explain}
\title{Private Information Retrieval}
\subtitle{Transferring data in a sneaky way}
\author{Casper Vestergaard Kristensen \and Thomas Carlsen \and Alexander Munch-Hansen}
\institute{Aarhus University}
\date{\today}
\begin{document}
\begin{frame}
\titlepage
\end{frame}
\begin{frame}
\setbeamertemplate{section in toc}[sections numbered]
\frametitle{Outline}
\setstretch{0.5}
\tableofcontents
\end{frame}
\section{Background}
\subsection{Introduction}
\begin{frame}
\frametitle{What have we done?}
\begin{itemize}
\item We have implemented several protocols, which we will briefly discuss
\item We have tested these protocols on multiple setups
\begin{itemize}
\item The server size
\item The number of databases
\item The block size
\end{itemize}
\item We have benchmarked across the same parameters
\item We have again reached the conclusion that big-O notation seldom points to the most practical solution
\end{itemize}
\end{frame}
\subsection{Protocols}
\subsubsection{Simple}
\begin{frame}
\frametitle{The simplest protocol}
\begin{block}{}
\begin{columns}[onlytextwidth,T]
\column{\dimexpr\linewidth-40mm-5mm}
\begin{itemize}
\item The simplest PIR protocol
\item The client has to send a total of $1$ bit and receive $n$ bits
\item The server has to send $n$ bits and receive $1$ bit
\item The client can then read off the data it wants locally (see the sketch on the next slide)
\end{itemize}
\column{40mm}
\includegraphics[width=40mm]{graphics/simple_protocol.png}
\end{columns}
\end{block}
\end{frame}
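% Sketch slide: a worked communication count for the trivial protocol, assuming a
% database x of n bits and a single desired index i.
\begin{frame}
\frametitle{Sketch: communication in the simplest protocol}
\begin{itemize}
\item The query is a single ``send everything'' bit, so it reveals nothing about $i$
\item The answer is the full database $x = x_1 x_2 \dots x_n$
\item The client simply reads $x_i$ locally
\item Total communication per query: $1 + n$ bits
\end{itemize}
\end{frame}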
\subsubsection{XOR-based}
\begin{frame}
\frametitle{Less simple protocol for $2$ databases}
\begin{block}{}
\begin{columns}[onlytextwidth,T]
\column{\dimexpr\linewidth-60mm-5mm}
\setstretch{0.9}
\begin{itemize}
\item A less simple PIR protocol
\item The client has to send $2n$ bits in the worst case
\begin{itemize}
\item The expected amount is only $n$ bits, however
\item The client also has to do quite a bit of work sampling randomness
\end{itemize}
\item The client only receives $1$ bit from each server
\item The servers each have to send $1$ bit and receive a combined worst case of $2n$ bits
\item Each server has to compute a lot of XORs
\item The client can then XOR the two answers to recover the bit it wants (see the correctness sketch on the next slide)
\end{itemize}
\column{60mm}
\includegraphics[width=60mm]{graphics/less_simple_protocol.png}
\end{columns}
\end{block}
\end{frame}
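% Sketch slide: why XORing the two answers works. We assume the standard 2-server
% set-based construction (a random set S to one server, S with index i flipped to
% the other); the exact encoding of the queries in our implementation may differ.
\begin{frame}
\frametitle{Sketch: why the XOR of the two answers is $x_i$}
\begin{itemize}
\item The client picks a random $S \subseteq \{1,\dots,n\}$ and queries $S$ and $S \triangle \{i\}$
\item The servers answer $a_1 = \bigoplus_{j \in S} x_j$ and $a_2 = \bigoplus_{j \in S \triangle \{i\}} x_j$
\item Every index $j \neq i$ lies in both sets or in neither, so it cancels:
\[
a_1 \oplus a_2 = \bigoplus_{j \in S \triangle (S \triangle \{i\})} x_j = x_i
\]
\item Each query on its own is a uniformly random set, which is why neither server learns $i$
\end{itemize}
\end{frame}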
\begin{frame}
\frametitle{Improving the previous scheme, TODO!}
\includegraphics[width=\textwidth]{graphics/balancedScheme.png}
\end{frame}
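% Sketch slide: our reading of the balanced block idea behind
% graphics/balancedScheme.png, assuming the database is split into roughly
% sqrt(n) blocks of sqrt(n) bits each; the concrete parameters are still TODO.
\begin{frame}
\frametitle{Sketch: the balanced block idea (as we understand it)}
\begin{itemize}
\item View the $n$-bit database as $\sqrt{n}$ blocks of $\sqrt{n}$ bits each
\item Run the XOR scheme over whole blocks instead of single bits
\item Queries shrink to $\sqrt{n}$ bits per server; each answer grows to a $\sqrt{n}$-bit block
\item The client XORs the two answer blocks and picks out the bit it wants inside the block
\item Total communication drops from $O(n)$ to $O(\sqrt{n})$
\end{itemize}
\end{frame}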
\subsubsection{Interpolation based}
\begin{frame}
\frametitle{Interpoly scheme}
We won't introduce it again; however, we expect it to be worse on almost all metrics:
\begin{itemize}
\item We have to send BigIntegers from client to server, as the scheme relies on large polynomials
\item We have to send either all of the random sequences or the seed from which they originate
\begin{itemize}
\item This is a balancing act: if the sequences are sent, the server does not have to compute them, but it is heavy on bandwidth
\item If only the seed is sent, bandwidth is low, but the server then has to compute the sequences itself
\end{itemize}
\item In general, all of the computations involving the polynomials are likely to slow down the response time of the servers (see the sketch on the next slide)
\end{itemize}
\end{frame}
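% Sketch slide: the interpolation idea as we understand it; the exact index
% encoding, curve degree, and field used in our implementation may differ.
\begin{frame}
\frametitle{Sketch: where the interpolation comes in}
\begin{itemize}
\item The database is encoded as a low-degree polynomial $F$ over a large field, with $F(E(i)) = x_i$ for an encoding $E(i)$ of each index
\item The client hides $E(i)$ inside a random curve $p(t)$ with $p(0) = E(i)$ and sends each server a distinct evaluation point $p(t_k)$, $t_k \neq 0$
\item Each server returns $F(p(t_k))$; a single evaluation point is uniformly distributed, so it reveals nothing about $i$
\item The client interpolates the univariate polynomial $t \mapsto F(p(t))$ from the answers and evaluates it at $t = 0$ to recover $x_i$
\item The field elements are the BigIntegers, and the randomness of $p$ is the sequence (or seed) mentioned on the previous slide
\end{itemize}
\end{frame}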
\section{Expected Results}
\begin{frame}
\frametitle{Overall expected results}
\begin{itemize}
\item We expect the scheme we have yet to implement to perform the best
\begin{itemize}
\item The client has to send less, so less bandwidth is used
\item The client has to compute less
\item But the server has to compute and send more, which is acceptable, as we expect the server to be stronger than the client
\end{itemize}
\item We expect the XOR-based scheme with $2$ databases to be outperformed by the scheme where the server simply sends the entire database
\begin{itemize}
\item This is because the client still sends an expected $n$ bits, while both client and server have to perform extra computation
\item The client has to sample randomness
\item The servers have to compute XORs
\end{itemize}
\item We expect the Interpoly scheme to be the worst in all regards, as mentioned on the previous slide (a rough cost summary follows on the next slide)
\end{itemize}
\end{frame}
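% Sketch slide: rough per-query communication, assuming the standard constructions
% sketched above; constants and the field size depend on the implementation.
\begin{frame}
\frametitle{Sketch: rough communication per query}
\begin{tabular}{lll}
Scheme & Client sends & Client receives \\
\hline
Send everything & $1$ bit & $n$ bits \\
XOR, $2$ databases & $2n$ bits (worst case) & $2$ bits \\
Balanced blocks & $O(\sqrt{n})$ bits & $O(\sqrt{n})$ bits \\
Interpolation & field elements (or a seed) & one field element per server \\
\end{tabular}
\end{frame}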
\section{Results}
\begin{frame}
\frametitle{Initial Results}
\end{frame}
\end{document}